Why 0.1 + 0.2 ≠ 0.3 in JavaScript? Understanding Number Precision and Fixes
This article explains why JavaScript’s Number type suffers from floating‑point precision errors, describes the IEEE‑754 double‑precision storage format, clarifies key Number properties, shows why 0.1 + 0.2 is not exactly 0.3, and provides practical code‑based solutions.
Most JavaScript developers have encountered floating-point precision errors, for example console.log(0.1 + 0.2 === 0.3) // false. In JavaScript all numbers, both integers and decimals, are represented by the Number type, which follows the IEEE-754 64-bit double-precision format.
Number storage standard
JavaScript's Number uses the IEEE-754 64-bit double-precision floating-point representation. The 64 bits are divided into three fields:
sign: 1 bit
exponent: 11 bits
fraction (mantissa): 52 bits
The exponent field ranges from 0 to 2047, and the values 0 and 2047 are reserved for special cases: a stored exponent of 2047 encodes ±Infinity (fraction zero) or NaN (fraction non-zero), while a stored exponent of 0 encodes ±0 (fraction zero) or a subnormal number (fraction non-zero). For normal numbers the exponent is stored with a bias of 1023, and the leading mantissa bit is an implicit 1. The value of a normal number is:
value = (−1)^sign × 1.fraction × 2^(exponent − 1023)
For subnormal numbers the implicit leading bit is 0 instead of 1 and the bias is 1022 (applied to the stored exponent 0, fixing the effective exponent at −1022), giving the value:
value = (−1)^sign × 0.fraction × 2^(−1022)
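These three fields can be inspected directly from JavaScript. Below is a minimal sketch (the helper name decompose is illustrative, not a built-in) that reads the bits of a Number through a DataView:

```javascript
// Read the IEEE-754 sign / exponent / fraction fields of a Number.
// decompose is an illustrative helper name, not a standard API.
function decompose(num) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, num);                // big-endian by default
  var hi = view.getUint32(0);             // upper 32 bits
  return {
    sign: hi >>> 31,                      // 1 bit
    exponent: (hi >>> 20) & 0x7ff,        // 11 bits, biased by 1023
    fractionHi: hi & 0xfffff,             // top 20 of the 52 fraction bits
    fractionLo: view.getUint32(4)         // remaining 32 fraction bits
  };
}

console.log(decompose(1));  // sign 0, exponent 1023 (= 0 + bias), fraction 0
console.log(decompose(-2)); // sign 1, exponent 1024 (= 1 + bias), fraction 0
```

For 1 the stored exponent is exactly the bias, since 1 = 1.0 × 2^0; for −2 the sign bit flips and the exponent is bias + 1.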
Key Number properties
Understanding the storage format clarifies several important Number properties:
Number.MAX_VALUE – the largest representable number.
Number.MIN_VALUE – the smallest positive subnormal number.
Number.EPSILON – the difference between 1 and the smallest number greater than 1.
Number.MAX_SAFE_INTEGER – the largest integer that can be represented exactly (2^53 − 1). Beyond it, distinct integers can collapse into one representation: Math.pow(2, 54) === Math.pow(2, 54) + 1 // true
Number.MIN_SAFE_INTEGER – the negative of Number.MAX_SAFE_INTEGER (−9007199254740991).
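Number.EPSILON gives a practical tolerance for comparing decimals instead of ===; a minimal sketch (the helper name nearlyEqual is mine, not from the article):

```javascript
// Compare two floats within Number.EPSILON rather than exactly.
// nearlyEqual is an illustrative name, not a standard API.
function nearlyEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```

Note that this fixed tolerance only suits results with magnitudes near 1; for larger operands the comparison threshold should be scaled accordingly.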
Why 0.1 + 0.2 ≠ 0.3
0.1 in binary is 0.0001100110011… (repeating). After rounding to 52 fraction bits the stored value is slightly larger than 0.1; 0.2 is likewise rounded up, while 0.3 is rounded down. Adding the stored binary values of 0.1 and 0.2 yields a binary fraction that rounds to 0.30000000000000004, not exactly 0.3. The following code shows the binary strings and their lengths.
<code>var a = 0.1;
console.log(a.toString(2)); // 0.0001100110011001100110011001100110011001100110011001101
var b = 0.2;
console.log(b.toString(2)); // 0.001100110011001100110011001100110011001100110011001101
var c = 0.3;
console.log(c.toString(2)); // 0.010011001100110011001100110011001100110011001100110011
var d = 0.1 + 0.2;
console.log(d.toString(2)); // 0.0100110011001100110011001100110011001100110011001101
console.log(d.toString(2).length); // 54
</code>
The resulting binary sum is rounded to 52 fraction bits, producing the final value 0.01001100…110100, which differs from the binary representation of 0.3.
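The same rounding is visible in decimal: toPrecision can print more digits of the stored values than the default 17-digit formatting shows. A short sketch (the digit strings in the comments assume standard IEEE-754 doubles):

```javascript
// Print the stored values to 20 significant digits to expose the
// rounding that the default formatting hides.
console.log((0.1).toPrecision(20));       // 0.10000000000000000555
console.log((0.2).toPrecision(20));       // 0.20000000000000001110
console.log((0.3).toPrecision(20));       // 0.29999999999999998890
console.log((0.1 + 0.2).toPrecision(20)); // 0.30000000000000004441
```

The stored 0.1 and 0.2 both sit slightly above their decimal values, so their exact sum lands above 0.3, while the stored 0.3 sits slightly below it.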
Floating‑point precision solutions
Depending on the use case, you can mitigate precision loss in several ways:
For display only, use Number.prototype.toFixed together with parseFloat.
<code>function formatNum(num, fixed = 10) {
return parseFloat(num.toFixed(fixed));
}
var a = 0.1 + 0.2;
console.log(formatNum(a)); // 0.3
</code>
For arithmetic, convert operands to integers, perform integer operations, then scale back. Example addition function:
<code>function add(num1, num2) {
  var decimalLen1 = (num1.toString().split('.')[1] || '').length;
  var decimalLen2 = (num2.toString().split('.')[1] || '').length;
  var baseNum = Math.pow(10, Math.max(decimalLen1, decimalLen2));
  // Math.round guards against num * baseNum itself being inexact,
  // e.g. 19.9 * 100 === 1989.9999999999998
  return (Math.round(num1 * baseNum) + Math.round(num2 * baseNum)) / baseNum;
}
console.log(add(0.1, 0.2)); // 0.3
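// The same scaling idea extends to multiplication. This sketch is an
// addition to the article (the helper name multiply is illustrative);
// going through strings avoids the inexact num * baseNum step entirely.
function multiply(num1, num2) {
  var str1 = num1.toString();
  var str2 = num2.toString();
  var decimalLen = (str1.split('.')[1] || '').length +
                   (str2.split('.')[1] || '').length;
  // Drop the decimal points to get exact integers, multiply,
  // then shift the decimal point back.
  return Number(str1.replace('.', '')) * Number(str2.replace('.', '')) /
         Math.pow(10, decimalLen);
}
console.log(multiply(0.1, 0.2)); // 0.02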
</code>
Reference material:
https://en.wikipedia.org/wiki/IEEE_754
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
https://en.wikipedia.org/wiki/Normal_number_(computing)
https://en.wikipedia.org/wiki/Denormal_number
QQ Music Frontend Team