Fundamentals · 7 min read

Why Summing All Floats in [-1, 1] Doesn’t Produce Zero – A Brute‑Force Study

This article explores the surprising result that adding every IEEE‑754 floating‑point number between –1 and 1, even with exhaustive brute‑force enumeration in Rust, yields a non‑zero sum due to representation limits, rounding errors, and accumulation order effects.

IT Services Circle

Background and Problem Statement

Mathematically, "adding every real number in [-1, 1]" is not a well-defined operation: the interval contains uncountably many values, and even for countable, conditionally convergent series the Riemann rearrangement theorem shows that the result depends on the order of summation. So there is no rigorous sense in which the answer "should" be zero. The article asks whether the analogous brute-force question for floating-point numbers, which form a finite symmetric set, behaves any better.
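As a reminder that order alone can change a sum, the alternating harmonic series converges to ln 2, yet a rearrangement of the very same terms converges to half that value:

\[
\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \ln 2,
\qquad
1 - \tfrac{1}{2} - \tfrac{1}{4} + \tfrac{1}{3} - \tfrac{1}{6} - \tfrac{1}{8} + \cdots = \tfrac{1}{2}\ln 2.
\]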

Floating‑Point Representation

Floating‑point numbers follow the IEEE‑754 standard, which offers 32‑bit single‑precision (f32) and 64‑bit double‑precision (f64) formats. Each value consists of a sign bit, exponent bits, and mantissa bits, so the format can represent only a finite subset of the real line.
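As a quick illustration (a minimal sketch, not part of the original article), Rust's `to_bits` exposes the three IEEE‑754 fields of an `f32` directly:

```rust
fn main() {
    // Decompose an f32 into its IEEE-754 fields:
    // 1 sign bit, 8 exponent bits, 23 mantissa (fraction) bits.
    let bits = 1.0f32.to_bits();
    let sign = bits >> 31;
    let exponent = (bits >> 23) & 0xFF;
    let mantissa = bits & 0x7F_FFFF;
    println!("sign={sign} exponent={exponent} mantissa={mantissa}");
    // 1.0 is stored as sign 0, biased exponent 127 (i.e. 2^0), mantissa 0.
}
```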

Brute‑Force Enumeration of Single‑Precision Floats

Enumerating all 2³² possible 32‑bit patterns, interpreting each as an f32 value, and selecting those that lie in [-1.0, 1.0] yields about 2.13 billion numbers. The following Rust code (shown in the image) performs this scan and measures the time.

[Image: Rust code for enumerating f32 values]
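The article's code appears only as an image, so here is a sketch of how such a scan can be written in Rust. To run quickly, this version scans a single 24‑bit slice of the pattern space; the full brute force is the same filter over `0..=u32::MAX`:

```rust
// True if this 32-bit pattern, read as an f32, lies in [-1, 1].
fn in_unit_interval(bits: u32) -> bool {
    let v = f32::from_bits(bits);
    // NaN fails every ordered comparison, so NaN patterns are excluded.
    (-1.0..=1.0).contains(&v)
}

fn main() {
    // Patterns 0x3F00_0000..0x3F80_0000 encode [0.5, 1.0); of the patterns
    // with biased exponent 127, only 0x3F80_0000 (exactly 1.0) qualifies.
    let count = (0x3F00_0000u32..=0x3FFF_FFFF)
        .filter(|&b| in_unit_interval(b))
        .count();
    println!("patterns in slice that fall in [-1, 1]: {count}");
}
```

Run over all 2³² patterns, the same predicate selects the 2 130 706 434 values reported below.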
=== Brute Force: all u32 -> f32 ===
Total patterns:  4294967296
In [-1, 1]:      2130706434
Collect time:    6.041 seconds

Initial Summation Result

Accumulating the selected f32 values in a single‑precision accumulator yields 4194304 (2²²), far from zero. Using a double‑precision accumulator still leaves a residual error of 0.015625.

Why Order Matters

Because floating‑point addition is not associative, the order of operands influences the final sum. Simple forward iteration, sorting, or pairing smallest‑with‑largest values each produce different outcomes.
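A two-line illustration of this non-associativity (a standalone sketch, not taken from the article's benchmark):

```rust
fn main() {
    // f32 addition is not associative: grouping changes the result.
    let (a, b, c) = (1.0f32, 1.0e8f32, -1.0e8f32);
    let left = (a + b) + c;  // 1.0 is absorbed by 1.0e8, then cancelled: 0.0
    let right = a + (b + c); // 1.0e8 cancels exactly first: 1.0
    println!("left = {left}, right = {right}");
}
```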

Alternative Strategies

Random Shuffle: Randomly permuting the numbers before summation reduces cancellation but introduces variability.

Kahan Summation: An algorithm that tracks lost low‑order bits with a compensation variable c to improve accuracy.

[Image: Kahan summation algorithm diagram]
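The diagram survives here only as a caption, so the following is a compact Rust rendering of Kahan's algorithm. The demonstration input (one large value followed by many values too small to register in a naive f32 sum) is an assumed example, not the article's data set:

```rust
/// Kahan (compensated) summation: `c` accumulates the low-order
/// bits that a plain `sum += x` would discard.
fn kahan_sum(xs: &[f32]) -> f32 {
    let mut sum = 0.0f32;
    let mut c = 0.0f32;
    for &x in xs {
        let y = x - c;     // fold previously lost low-order bits back in
        let t = sum + y;   // big + small: the low bits of y are lost here
        c = (t - sum) - y; // algebraically zero; numerically, the lost bits
        sum = t;
    }
    sum
}

fn main() {
    // 1.0 followed by ten million values of 1e-8, each individually
    // too small to change a naive f32 accumulator sitting at 1.0.
    let mut xs = vec![1.0f32];
    xs.extend(std::iter::repeat(1.0e-8f32).take(10_000_000));
    let naive: f32 = xs.iter().sum();
    println!("naive: {naive}, kahan: {}", kahan_sum(&xs));
}
```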

Observations

Floating‑point numbers are densely packed near zero; about half of the 4.3 billion f32 values lie within [-1, 1].

The sum’s sign and magnitude depend heavily on the accumulation order.

Even double‑precision accumulators retain noticeable error after billions of additions.

When the accumulator reaches around 2²⁴ = 16 777 216, adding numbers smaller than 1 no longer changes the result (“large‑number eats small‑number” effect).
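The last two observations are easy to check directly (a small standalone verification, not part of the article's benchmark):

```rust
fn main() {
    // Above 2^24, consecutive f32 values are 2.0 apart, so adding 1.0
    // is absorbed entirely by round-to-nearest-even.
    let big = 16_777_216.0f32; // 2^24
    assert_eq!(big + 1.0, big);

    // Closed-form count of f32 bit patterns in [-1, 1]: for each sign,
    // 127 biased exponents (0..=126, i.e. magnitude < 1) times 2^23
    // mantissas, plus the pattern for 1.0 itself.
    let per_sign: u64 = 127 * (1 << 23) + 1;
    println!("{}", 2 * per_sign); // matches the brute-force count above
}
```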

Conclusion

The experiment demonstrates that, unlike the idealized real‑number case, summing all floating‑point numbers in a symmetric interval does not yield zero because of finite representation, rounding errors, and non‑associative addition. Careful algorithms such as Kahan summation are required to mitigate these effects.

Rust · floating-point · brute force · IEEE-754 · Kahan summation · numerical errors
Written by IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
