I become suspicious when I see a Medium user posting well-written deep articles as frequently as this user appears to be doing. How can we tell whether this is AI slop or not?
Their articles aren’t that deep and they mostly focus on similar topics.
I think it’s perfectly possible for someone to have a backlog of work/experience that they are just now writing about.
If it were AI spam, I would expect many disparate topics at a depth slightly more than a typical blog post but clearly not expert. The user page shows the latter, but not the former.
However, the Rubik’s cube article does seem abnormal. The phrasing and superficiality make it seem computer-generated; a real Rubik’s aficionado would have spent some time on how they cube.
Of course I say this as someone much more into mathematics than “normal” software engineering. So maybe their writing on those topics is abnormal.
You just know they will either take an oath to defend the Tangerine Torquemada or lose their command.
I stopped using floats 30 years ago, when I learned what rounding errors can do once you tally a big enough number of items. My employer turned over around 25M a year, and it had to add up to the cent for the audits.
There’s a good documentary about this.
Fun fact: This is actually called the Salami Shaving Scam. Basically, shave off tiny pieces of a bunch of large chunks, and eventually you’ll have a massive amount. Like taking a single slice of salami from every sausage that is sold.
And KSP (rocket exploding game) had ten years’ worth of floating point errors.
Like Minecraft has, too. Just go on a long, long walk in one direction.
What happens?
All kinds of weird things. There is a video explaining the details, and you’ve got to be far, far out.
Used to*, it was fixed in some version or another, where the procgen no longer evaluated how far you were from the origin
OK, I have not played it for AGES. Nice to see something like that fixed, as it was a bit system-inherent.
I’ll have to look it up after work. Sounds interesting.
The physics starts to glitch out, or at least it used to, until it got upgraded to doubles. I also use doubles for my game engine, as well as (optionally) limiting pixel-precise things within int.max and int.min.
Does the world repeat after a set point?
Technically yes, and with tile layers, you can even set them repeating on a shorter area.
Single floats sure, but doubles give plenty of accuracy unless you absolutely need zero error.
For example, getting 1000 random 12-digit ints, multiplying them by 1e9 as floats, doing pairwise differences between them, summing the answers, and dividing by 1e9 to get back to ints gives a cumulative error of 1 in 10^16. Assuming your original values were in dollars, that’s roughly 0.001 cents of error in a billion-dollar total. And that’s going deliberately out of the way to make the transactions as perverse as possible.
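A rough way to rerun that experiment (a Python sketch; the seed, ranges, and error bound are my choices, not from the comment):

```python
import random
from itertools import combinations

random.seed(0)

# 1000 random 12-digit integers: the exact reference values
ints = [random.randrange(10**11, 10**12) for _ in range(1000)]

# exact pairwise-difference sum using integer arithmetic
exact = sum(abs(a - b) for a, b in combinations(ints, 2))

# same computation after scaling by 1e9 as doubles, then scaling back
scaled = [x * 1e9 for x in ints]
approx = sum(abs(a - b) for a, b in combinations(scaled, 2)) / 1e9

# relative error of the double computation vs. the exact one
rel_err = abs(approx - exact) / exact
print(rel_err)
```

On a typical run the relative error lands many orders of magnitude below anything a cent-level total would notice.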
Nope. With about a hundred thousand invoiced items, things easily run off the rails. I’ve seen it. Just count cents, and rounding errors stay within tight, deterministic bounds.
You can use Kahan summation to mitigate floating point errors. A mere 100 thousand floating point operations is a non-issue.
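For reference, a minimal Kahan summation sketch (Python; the demo values are mine, chosen to make the naive drift visible):

```python
def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost by each add."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # next term, corrected by the compensation
        t = total + y        # big + small: low bits of y may be lost here
        c = (t - total) - y  # recover the lost bits (algebraically zero)
        total = t
    return total

# demo: repeatedly adding a tiny value to a large running total
vals = [1e9] + [0.01] * 100_000
naive = sum(vals)        # accumulates visible rounding error
kahan = kahan_sum(vals)  # stays essentially exact
print(naive, kahan)
```

The naive sum drifts by a fraction of a cent over the 100,000 adds; the compensated sum does not.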
As a heads up, computational physics and mathematics tackle problems trillions of times larger than any financial computation; that’s where tons of algorithms have been developed to handle floating point errors. In fact, essentially any large-scale computation specifically accounts for them.
Yep. And in accounting this is done with integers. In my field (not accounting), calculations are done either in integer or in fixed-point arithmetic - which is basically the same in the end. Other fields work with floats. This variety exists because every field has its own needs and preferences. Forcing “One size fits all” solutions was never a good idea, especially when certain areas have well-defined requirements and standards.
Yeah, but compared to counting money, nobody cares if some physics paper got its numbers wrong. :-)
(Not to mention that would require the paper to have reproducible artifacts first.)
Physics modeling is arguably the most important task of computers. That was the original impetus for building them; artillery calculations in WW2.
All engineering modeling uses physics modeling, almost always linear algebra (which involves large summations). Nuclear medicine—physics, weather forecasting—physics, molecular dynamics and computational chemistry—physics.
Physics modeling is the backbone of modern technology, it’s why so much research has been done on doing it efficiently and accurately.
We’re using general relativity to calculate satellite orbits. Fuck your point-of-sale system; if our satellites come crashing down we’re gonna have much bigger problems lol.
You are underestimating how precise doubles are. Summing up one million doubles randomly selected from 0 to one trillion only gives a cumulative rounding error of ~60. That could be one million transactions of zero to one billion dollars each, at 0.1-cent resolution, ending up off by a total of 6 cents. Actually it would be better than that, as you could scale to something like thousands or millions of dollars to keep your number range closer to 1.
Sure, if you are doing very high volumes you probably don’t want to do it, but for a lot of simple cases doubles are completely fine.
Edit: yeah, using the same million random numbers but dividing them all by 1000 before summing (so working in kilodollars rather than dollars) gave perfect accuracy: no rounding errors at all after one million double additions of values between 1e-3 and 1e9.
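This kind of experiment is easy to rerun. A sketch using math.fsum as the correctly rounded reference (the seed and bounds are my choices):

```python
import math
import random

random.seed(42)

# one million doubles drawn uniformly from [0, one trillion)
vals = [random.uniform(0.0, 1e12) for _ in range(1_000_000)]

naive = sum(vals)           # plain left-to-right double summation
accurate = math.fsum(vals)  # correctly rounded sum of the same doubles

abs_err = abs(naive - accurate)
rel_err = abs_err / accurate
print(abs_err, rel_err)
```

The exact error depends on the draw, but the relative error stays vanishingly small compared to the ~5e17 total.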
The issue is different. Imagine you have ten dollars that you have to spread over three accounts. So this would be 3.33 for each, correctly rounded down. And still, a cent is missing in the sum. At this point, it is way easier to work with integers to spread leftovers, or curb overshoots.
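A minimal integer-cents sketch of spreading the leftover (the function name and the “first parts get the extra cent” policy are my own choices):

```python
def split_cents(total_cents: int, parts: int) -> list[int]:
    """Split an integer amount of cents into parts that sum back exactly."""
    base, leftover = divmod(total_cents, parts)
    # hand the leftover cents to the first `leftover` parts, one each
    return [base + 1 if i < leftover else base for i in range(parts)]

print(split_cents(1000, 3))  # ten dollars over three accounts -> [334, 333, 333]
```

Whatever the split policy, the invariant is that the parts sum back to the original total, which integer arithmetic makes trivial to guarantee.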
That doesn’t make any sense. As you say, in that case you have to “spread leftovers”, but that isn’t really any more difficult with floats than integers.
It’s better to use integers, sure. But you’re waaaay over-blowing the downsides of floats here. For 99% of uses f64 will be perfectly fine. Obviously don’t run a stock exchange with them, but think about something like a shopping cart calculation or a personal finance app. Floats would be perfectly fine there.
As someone who has implemented shopping carts, invoicing solutions and banking transactions, I can assure you floats will be extremely painful for you.
A huge benefit of big decimals is that they don’t allow you to make a mistake (as easily) as floats where imprecision just “creeps in”.
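A quick sketch of that property using Python’s decimal module (the amounts are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("19.99")
quantity = 3

# every step is exact decimal arithmetic; rounding happens only
# where you explicitly ask for it, with an explicit rounding mode
total = (price * quantity).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(total)  # 59.97, exactly
```

Because the rounding step is explicit, imprecision can’t silently “creep in” the way it does when a float representation is already off before you do any arithmetic.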
As you said, better use integers. And that’s exactly what is done at this point.
Indeed, but there’s no need to shit on people using floats because in almost all cases they are fine too.
I fail to see a difference there. 10.0/3 = 3.33333333333, which you round down to 3.33 (or whatever fraction of a cent you are using), as you say, for all accounts, and then have to deal with the leftovers. If you are using a fixed decimal as the article suggests, you get the same issue. If you are using integer fractions of a cent, say millicents, you get 1000000/3 = 333333 with the exact same rounding error.
This isn’t a problem with the representation of numbers; it’s trying to split a quantity evenly when it doesn’t divide evenly. (And it should be noted the double is giving the most accurate representation of 10/3 dollars here, and so would be most accurate if this operation were in the middle of a series of calculations rather than immediately preceding a money movement.)
As I said before, doubles probably aren’t the best way to handle money if you are dealing with high volumes or complex transactions, but they are not the disaster waiting to happen that single floats are, and using a double representation then converting to whole cents when you need to actually move real money (like a sale) is fine.
I fail to see a difference there
That I noticed some posts ago. The issue has not changed since then.
And so instead of explaining why and clarifying any misunderstanding, you chose to snarkily insult my intelligence. Very mature.
Stop Using Floats
no shit
or Cents
huh…?
That was a good point.
I think maybe they meant using integers for cents
I think using millicents is pretty standard in fin-tech.
I got hung up on this line:
This requires deterministic math with explicit rounding modes and precision, not the platform-dependent behavior you get with floats.
Aren’t floats mostly standardized these days? The article even mentions that standard. Has anyone here seen platform-dependent float behaviour?
Not that this affects the article’s main point, which is perfectly reasonable.
Floating-Point Determinism | Random ASCII - tech blog of Bruce Dawson https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
The short answer to your questions is no, but if you’re careful you can prevent indeterminism. I’ve personally run into it encoding audio files using the Opus codec on AMD vs Intel processors (slightly different binary outputs for the exact same inputs). But if you’re able to control your dev environment from platform choice all the way down to the assembly instructions being used, you can prevent it.
Thanks, that’s an excellent article, and it’s exactly what I was looking for.
The IEEE standard actually does not dictate a rounding policy
Mostly standardized? Maybe. What I know is that float summation is not associative, which means that things that are supposed to be equal ((x + y) + z and x + (y + z)) are not necessarily equal for floats.
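A tiny demonstration of the non-associativity (the values are chosen to make the effect obvious):

```python
x, y, z = 1e16, -1e16, 1.0

left = (x + y) + z   # the cancellation happens first, so the 1.0 survives
right = x + (y + z)  # 1.0 is absorbed into -1e16 before the cancellation

print(left, right)  # 1.0 0.0
```

Both orderings are IEEE-correct at every step; it’s the grouping that changes which low-order bits get rounded away.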
The real standard is whatever Katherine in accounting got out of the Excel nightmare sheets they reconcile against.
If you count the programming language you use as ‘platform’, then yes. Python rounds both 11.5 and 12.5 to 12.
That is default IEEE behaviour: https://en.wikipedia.org/wiki/Rounding#Rounding_half_to_even
This is the default rounding mode used in IEEE 754 operations for results in binary floating-point formats.
Though it’s definitely a bad default because it’s so surprising. Javascript and Rust do not do this.
Not really anything to do with determinism though.
This is a common rounding strategy because it doesn’t consistently overestimate like the grade school rounding strategy of always rounding up does.
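The behavior is easy to check from a Python REPL (the built-in round uses half-to-even on these exact-half values):

```python
# ties go to the nearest even integer
print(round(11.5))  # 12
print(round(12.5))  # 12  (not 13)
print(round(13.5))  # 14

# half-to-even sends half the ties down and half up,
# so the rounded sum of many exact-half values stays unbiased
ties = [k + 0.5 for k in range(100)]
print(sum(round(t) for t in ties) - sum(ties))  # 0.0
```

With always-round-half-up, the same sum would overshoot by 50, one half per tie.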