One person suggested this talk should be “don’t use JS for maths…”
The most reported bug in JS: 0.1 + 0.2 != 0.3 (it actually evaluates to 0.30000000000000004) …so what is actually going on here?
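You can reproduce it in any JS console (my one-liner, not a slide from the talk):

```js
// The classic floating point surprise
0.1 + 0.2;          // 0.30000000000000004
0.1 + 0.2 === 0.3;  // false
```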
A brief history of maths… we have had different number systems through history; and new understandings and rules on how to use them.
Humans like to count in 10s. Nobody really knows why but it’s probably to do with having ten fingers. Computers prefer 2s (powers of two underpin binary).
IEEE-754: a standard (not freely available) that defines formats for numbers, operations on them, and so on. It’s used in lots of modern programming languages and devices. It is a little bit like scientific notation, e.g. 5.6 × 10⁻⁹.
There is an IEEE-754 calculator online which helps you understand this system. So why do we care?
All numbers in JS are 64-bit IEEE-754 floating point values.
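A rough sketch (mine, not from the talk) of how to peek at those 64 bits from JS, the same breakdown the online calculator shows:

```js
// Peek at the raw 64 bits IEEE-754 uses to store a JS number.
function bitsOf(n) {
  const buffer = new ArrayBuffer(8);
  new DataView(buffer).setFloat64(0, n); // big-endian, so the sign bit comes first
  return [...new Uint8Array(buffer)]
    .map(byte => byte.toString(2).padStart(8, "0"))
    .join("");
}

bitsOf(0.1);
// "0011111110111001100110011001100110011001100110011001100110011010"
// 1 sign bit, then 11 exponent bits, then 52 significand bits
```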
| You write | What actually gets stored |
| --- | --- |
| 0.1 | 0.1000000000000000055511151231257827… |
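You don’t need the calculator to see this; asking JS to print more decimal places than it normally shows reveals the stored approximation (my example, not the one from the presentation):

```js
(0.1).toFixed(20);   // "0.10000000000000000555"
(0.3).toFixed(20);   // "0.29999999999999998890"
```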
Some numbers simply can’t be stored exactly. Think of it like trying to produce 43c in Australian currency – you cannot do it, because we don’t have coins that add up to that value (there are no 1c or 2c coins).
So we get some calculation problems.
Integers are stored exactly (up to Number.MAX_SAFE_INTEGER, 2^53 − 1), but people tend not to work purely in integers because it’s a little clunky. Much better to use a library:
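For example, with an arbitrary-precision decimal library such as big.js (one option for illustration; the talk may have named others):

```js
import Big from "big.js";

// Work in exact decimal values instead of binary floating point
const total = Big("0.1").plus("0.2");

total.toString();   // "0.3"
total.eq("0.3");    // true
```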
Issues with this? Speed: the libraries are not as fast as native numbers. Also, you can’t generally do things you’d expect like x + y; you have to do things like sum(x, y) or similar syntax.
Comparing numbers can be made more predictable by allowing a margin of error (these small differences are usually denoted with the epsilon symbol, so expect to see ε in examples).
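A minimal sketch of that idea (my code; the function name and the choice of Number.EPSILON as the tolerance are illustrative, and for larger operands the tolerance would need to scale):

```js
// Treat two numbers as equal if they differ by less than a tiny tolerance (ε)
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

0.1 + 0.2 === 0.3;            // false
nearlyEqual(0.1 + 0.2, 0.3);  // true
```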
But what about the future? These workarounds aren’t great… so what might happen:
Other issues not even discussed…