Introduction
Hello folks! I’m back with another blog. This time we’re going to talk about how JS stores numbers in memory.
JS is a quirky language: a lot of what it has or does was borrowed from existing languages and practices. Recently I have been reading the book How JavaScript Works
by Douglas Crockford
, and he is very opinionated about JS.
According to him, JS has its quirks, but there are good parts to it. For example, he loves the fact that JS has only one number type, unlike Java, whose multiple number types lead to a lot of bugs because we get lost in type conversions.
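To see this “one number type” in action, here is a quick sketch: JS reports the same type, number, for integers and decimals alike.

```javascript
// JS has a single number type: integers and decimals are both "number"
console.log(typeof 42);     // "number"
console.log(typeof 42.5);   // "number"
console.log(typeof -0.001); // "number"
```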
Have you ever wondered why?
const add = 0.1 + 0.2
console.log(add) // 0.30000000000000004
console.log(0.3 === 0.1 + 0.2) // returns false
We will address both of these results at the end.
Here are a few facts about how JS stores numbers:
- JS uses 64-bit double precision floating point numbers. (If this sounds verbose, don’t worry, I will explain exactly what it means.)
- JS’s number is based on Java’s double, which in turn follows IEEE 754, a standard for storing floating point numbers.
Let’s get an overview of the decimal and binary number systems
Humans have always represented numbers in the decimal system, and we are very comfortable with it. Let us see how we represent decimal values in powers of 10.
decimal in powers of 10
// Using scientific notation (same thing, shorter syntax)
1e3 // 1000 (10 ** 3)
1e-3 // 0.001 (10 ** -3)
// As power of 10
1.56e2 // means 1.56 × 10^2 = 156
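JS itself understands this notation. As a small sketch, number literals accept e-notation, and toExponential gives the scientific form back as a string:

```javascript
// Number literals accept e-notation, and toExponential produces it
console.log(1e3);                    // 1000
console.log((156).toExponential(2)); // "1.56e+2"
```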
Computers, on the other hand, first convert numbers to binary and then store them in powers of 2, because binary has only two digits, 0 and 1.
decimal to binary in powers of 2
// decimal to binary representation
10011100 // 156
10011100.01 // 156.25
// scientific notation (power of 2)
1.001110001 × 2^7 // 156.25
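We can check these conversions directly in JS: toString(2) renders a number in binary (including the fractional part), and parseInt reads an integer back. A quick sketch:

```javascript
// Convert decimal to binary and back
console.log((156).toString(2));       // "10011100"
console.log((156.25).toString(2));    // "10011100.01"
console.log(parseInt("10011100", 2)); // 156
```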
Understanding 64 bit double precision floating point numbers
64-bit layout of a JavaScript number (IEEE-754 double precision)
It’s called double precision because the number uses 64 bits instead of 32. Single precision (32 bits) has 23 mantissa bits, while double precision has 52 mantissa bits, giving 29 extra bits. The extra bits allow for a more accurate representation of decimal numbers.
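Math.fround lets us feel this difference: it rounds a number to the nearest single precision (32-bit) value, and for most decimals the discarded mantissa bits change the value. A small sketch:

```javascript
// 0.1 cannot be stored exactly; rounding it to 32 bits changes it further
console.log(Math.fround(0.1) === 0.1); // false
console.log(Math.fround(0.5) === 0.5); // true: 0.5 is exact in binary
```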
| Sign (1) | Exponent (11) | Mantissa (52) |
- 1 bit → Sign
  - 0 → positive
  - 1 → negative
- 11 bits → Exponent
  - Stored with a bias of 1023
  - Raw range: 0 … 2047
  - Effective range (after removing bias): −1022 … +1023
- 52 bits → Fraction (Mantissa)
  - Stores the fractional part
  - For normalized numbers, an implicit leading 1 is added in front
  - Together, this makes the significand = 1.mantissa (53 bits of precision)
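We can actually peek at these three fields from JS itself. The sketch below (assuming a BigInt-capable runtime such as Node.js) reinterprets a double’s raw 64 bits through typed arrays sharing one buffer, then slices out the sign, exponent, and mantissa:

```javascript
// Reinterpret a double's raw 64 bits via a shared ArrayBuffer
const f64 = new Float64Array(1);
const u64 = new BigUint64Array(f64.buffer);

function ieee754Fields(x) {
  f64[0] = x;
  const bits = u64[0];
  return {
    sign: Number(bits >> 63n),                // 1 bit
    exponent: Number((bits >> 52n) & 0x7ffn), // 11 bits (biased)
    mantissa: (bits & 0xfffffffffffffn)       // 52 bits
      .toString(2)
      .padStart(52, "0"),
  };
}

console.log(ieee754Fields(156.25));
// { sign: 0, exponent: 1030, mantissa: "0011100010...0" }
```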
Below are the formulae standardised in the IEEE 754 standard for calculating the bias and the stored exponent.
// formula for bias calculation
bias = 2^(k - 1) - 1 // 2^(11 - 1) - 1 = 1023
// k = number of exponent bits
// Storing an exponent
storedExponent = actualExponent + bias
// Retrieving the actual exponent
actualExponent = storedExponent - bias
// Examples:
// Smallest normal exponent
storedExponent = 1
actualExponent = 1 - 1023 = -1022
// Largest normal exponent
storedExponent = 2046
actualExponent = 2046 - 1023 = +1023
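The same arithmetic is easy to sanity-check in JS. A minimal sketch of the formulas above:

```javascript
const exponentBits = 11;
const bias = 2 ** (exponentBits - 1) - 1; // 1023

// Round-trip an exponent through its stored (biased) form
const actualExponent = 7;
const storedExponent = actualExponent + bias;        // 1030
console.log(bias, storedExponent, storedExponent - bias); // 1023 1030 7
```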
Taking 156.25 as an example, how is it stored in memory?
// Let's store 156.25 as a 64-bit double precision number
// 1. Sign bit: The number is positive.
const sign = 0;
// 2. Binary representation: 156.25 is 10011100.01 in binary.
const binaryRepresentation = "10011100.01";
// 3. Normalize to scientific notation (base 2): Move the binary point 7 places to the left.
const normalizedForm = "1.001110001 × 2^7";
const actualExponent = 7;
// 4. Stored Exponent: Add the bias (1023) to the actual exponent.
const storedExponent = 7 + 1023; // 1030
const storedExponentBinary = storedExponent.toString(2).padStart(11, '0'); // "10000000110"
// 5. Mantissa: Take the fractional part of the normalized form and pad it to 52 bits.
// The implicit '1.' is not stored.
const mantissa = "0011100010000000000000000000000000000000000000000000";
// 6. Final 64-bit Representation
const finalRepresentation = `${sign} ${storedExponentBinary} ${mantissa}`;
console.log(finalRepresentation);
// Output: 0 10000000110 0011100010000000000000000000000000000000000000000000
Now our final question: why is 0.1 + 0.2 not equal to 0.3?
// The final 64-bit binary for 0.3 (literal)
// 0 01111111101 0011001100110011001100110011001100110011001100110011
// The final 64-bit binary for the sum of 0.1 + 0.2
// 0 01111111101 0011001100110011001100110011001100110011001100110100
Notice that the last few bits of the mantissa are different. This tiny difference is what causes the comparison 0.1 + 0.2 === 0.3 to return false.
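This is also why direct equality is the wrong tool for floating point results. A common sketch is to compare against a small tolerance, such as Number.EPSILON (the gap between 1 and the next representable double):

```javascript
// Compare floats with a tolerance instead of ===
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```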