console.log(0.2 + 0.1) // 0.30000000000000004
But after doing a bit of research, I figured out that this is not a bug. It's simply math, and the way computers deal with numbers.
Well, there are some other weird cases too.
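For instance, here are a couple of similar surprises you can try in any JavaScript console -

console.log(0.1 + 0.7) // 0.7999999999999999
console.log(0.3 - 0.1) // 0.19999999999999998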
What’s happening behind the scenes?
The simple reason is that computers use a base-2 (binary) floating-point number system.
Let’s understand it in detail with a very simple analogy.
We humans use the base-10 (decimal) number system to read, write and understand numbers in daily life.
When someone says 1/10, we understand it as exactly 0.1, but for 1/3 there is no exact decimal value. It's 0.333333… (never ending), so for practical purposes we round it to a finite number of digits, say 0.333. And if we then add 0.333 three times by hand, we get 0.999 instead of 1 - the tiny rounding error shows up in the result.

Binary floating point has exactly the same problem, just with different numbers, and 1/10 is one of them. So when we write -
let a = 0.1;
In binary, 1/10 is a never-ending repeating fraction -

0.000110011001100110011… (the pattern 0011 repeats forever)

Since it can't be stored exactly, the computer keeps the closest 64-bit floating-point value instead. The value actually stored for 0.1 is -

0.1000000000000000055511151231257827021181583404541015625
In the same way, 0.2 is actually stored as -

0.200000000000000011102230246251565404236316680908203125
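You can verify these stored values yourself. Assuming a reasonably modern engine (toFixed accepts up to 100 digits these days), asking for enough decimal places reveals the exact value behind each literal -

console.log((0.1).toFixed(55)) // prints the long value stored for 0.1, shown above
console.log((0.2).toFixed(55)) // prints the long value stored for 0.2 (plus one padding zero)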
So when we do 0.1 + 0.2, what actually gets added is -
0.1000000000000000055511151231257827021181583404541015625 + 0.200000000000000011102230246251565404236316680908203125 = 0.3000000000000000166533453693773481063544750213623046875
Now, we need to understand what happens when we console.log such a value.

So, for example, let's log the exact value stored for 0.1 -
console.log(0.1000000000000000055511151231257827021181583404541015625) // 0.1
The output is 0.1. That's because JavaScript prints the shortest decimal string that maps back to the same underlying double, and for this stored value that shortest string is 0.1.
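Another way to see this: once parsed, the long literal and 0.1 are the very same number -

console.log(0.1 === 0.1000000000000000055511151231257827021181583404541015625) // true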
The same thing happens when we log the result of 0.1 + 0.2. The exact sum above can't be stored in a 64-bit double either, so it is rounded to the nearest representable value -

0.3000000000000000444089209850062616169452667236328125

and when we log it, JavaScript prints the shortest decimal that maps back to that stored value, which happens to be 0.30000000000000004.

So, this is why the answer of 0.1 + 0.2 is 0.30000000000000004 and not 0.3.
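As a practical note, the usual way to deal with this is to compare with a small tolerance, or to round the result before displaying it. A minimal sketch -

const sum = 0.1 + 0.2;
console.log(sum === 0.3);                           // false
console.log(Math.abs(sum - 0.3) < Number.EPSILON);  // true - equal within tolerance
console.log(sum.toFixed(2));                        // "0.30" - rounded for display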
I hope you now have a fair idea of what to say when someone asks why 0.1 + 0.2 is not 0.3.