The underlying truth of this joke is: programming syntax is less confusing than mathematical syntax. There are genuinely ambiguous layouts of syntax in math (to a human reader who hasn’t internalized PEMDAS, anyway), whereas you get a compilation error if ANYTHING is ambiguous in programming. (Yes, I am WELL aware of the frustrations of runtime errors.)
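To make that concrete, here’s a minimal Python sketch (my own illustration, using the usual viral expression and nothing else from the thread): implicit multiplication simply isn’t part of the grammar, so you’re forced to commit to one reading with an explicit multiplication sign.

```python
# The two possible readings of 6 ÷ 2(1+2), spelled out explicitly:
reading_a = 6 / 2 * (1 + 2)    # strict left-to-right: (6 / 2) * 3 = 9.0
reading_b = 6 / (2 * (1 + 2))  # "implied multiplication binds tighter": 6 / 6 = 1.0
print(reading_a, reading_b)    # 9.0 1.0

# The literal transcription 6/2(1+2) parses, but Python treats 2(1+2) as
# calling the integer 2 like a function, so it fails loudly instead of
# silently picking a convention:
#   6 / 2(1 + 2)  ->  TypeError: 'int' object is not callable
```

(Granted, that last failure is a runtime error rather than a compile error, which is exactly the parenthetical above.)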
Internalizing PEMDAS without knowing it’s literally the same thing as BODMAS is exactly the problem!
what in the name of fuck is BODMAS
Same as PEMDAS, except:
Parentheses -> Brackets
Exponents -> Order
Multiplication <-> Division (those two swap places)
BODMAS
I learned it as “BEDMAS”
Brackets
Exponents
(You can guess the rest)
But when I learned BEDMAS, my teacher directed us to do implied multiplication before other multiplication/division, which, as far as I’m aware, is mathematically correct according to the proper order of operations (as opposed to whatever acronym summary you learned).
Before I get "umm, acktually"-d … I know that’s not the full picture of the order of operations as it should be in mathematics. But for the limited scope of algebra I learned in high school, AFAIK this is correct as far as my understanding goes. I’m not a mathematician, and I work with computers all day long; they do the math for me when I need any of it, so a deeper understanding isn’t helpful in my case.
I’m a Maths teacher/tutor. The actual rules are Terms and The Distributive Law. There is no such thing as “implicit multiplication” (which is usually people lumping the 2 separate rules together as one and ending up with wrong answers).
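For anyone following along, here’s the worked example I picture for that distinction, using the usual viral expression (my own illustration, not the commenter’s wording, so treat it as a sketch):

```latex
% The Distributive Law expands the bracketed factor:
\[ 2(1+2) = 2 \cdot 1 + 2 \cdot 2 = 6 \]
% Terms: 2(1+2) is a single term, so the division applies to the whole thing:
\[ 6 \div 2(1+2) = \frac{6}{2(1+2)} = \frac{6}{6} = 1 \]
```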
order? how does that make sense? brackets are alright, I guess
Order is often used to describe exponents when talking about functions and other mathematical properties. In a lot of cases, it’s also equivalent to a degree. For example, a function y = x² - 9 is a second-order/degree polynomial.
Alternatively, one could find a second-order rate of a reaction, which means the rate of reaction is proportional to the square of a solution’s concentration.
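For instance, the rate law for a reaction that is second order in a single reactant A looks like this (my own illustrative equation, not from the comment above):

```latex
\[ \mathrm{rate} = k[\mathrm{A}]^{2} \]
```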
Order of magnitude? Thinking out loud.
You have the right idea, and you are right in some regards. Generally the order of magnitude is an order of 10. That is, 1350 could be represented as 1.350×10³, so the order of magnitude is the third order of 10, which is 10³ (i.e. some value x×1000).
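If you want the computer to do that bookkeeping, here’s a minimal Python sketch (my own, standard library only) that recovers the exponent from scientific notation:

```python
import math

def order_of_magnitude(x: float) -> int:
    """Exponent n when x is written in scientific notation as d.ddd x 10^n."""
    return math.floor(math.log10(abs(x)))

print(order_of_magnitude(1350))   # 3   -> 1.350 x 10^3
print(order_of_magnitude(0.042))  # -2  -> 4.2 x 10^-2
```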
It’s actually short for “to the order of”, as in 2 squared is 2 to the order of 2. i.e. same thing as Exponent or Index.
I mean … yea. The exact problem is that math is not taught correctly. The order of operations makes total logical sense for what the operations are doing.
The problem only arises when people don’t come to all of the appropriate conclusions on their own.
Every single Maths textbook I’ve seen teaches it correctly. The issue is people not remembering what they were taught (and then programming a calculator without checking it first).
So it’s better to do higher math in Python? I agree.
Python isn’t the only programming language.
But it’s quite a common language in science.
Counterpoint: C function pointers (or just C in general)
Also: sometimes, a mathematician just has to invent some concept or syntax to convey something unconventional. The specific use of subscript/superscript, whatever ‘phi’ is being used for, etc. on whatever paper you’re reading doesn’t have to correlate to how other work uses the same concepts. It’s bad form, but sometimes it’s needed, and if it’s useful enough it gets added to the general canon of what we call “math”. Meanwhile, you can encapsulate and obfuscate things in software, sure, but you can always get down to the bedrock of what the language supports; there’s no inventing anything new.
Yea, that’s it. Math syntax was created for humans, and programming syntax always had to remain deterministic for computers. It’s not an insult to either, just interesting how ambiguities often show up when humans are involved. I say ‘often’ for the general case: math should be just as deterministic as programming, but in some situations it isn’t.
Maths is 100% deterministic for order of operations. The issue is people not following all of the rules.
Math is. The syntax is arbitrary in some edge cases.
Such as?