(Python) Float equality
The Gotcha
Floating-point numbers are not stored with perfect precision. This affects equality comparisons between floats that could reasonably be expected to have the same value:
>>> (0.1 + 0.1 + 0.1) == 0.3
False
>>> 0.1 + 0.1 + 0.1
0.30000000000000004
Why it Happens
This isn't a Python issue or even a software issue. Most CPUs – i.e. the hardware layer – use the IEEE 754 standard to implement floating-point numbers. Because floats are stored as binary numbers, fractional values that have recurring digits in their binary representation must be rounded. For example, while the decimal 1/2 becomes 0.1 in binary and can be stored precisely, the decimal 1/3 becomes 0.010101... in binary and must be rounded. (The decimal 0.1 is also recurring in binary, which is why the example above misbehaves.) Performing arithmetic operations on these floats then accumulates the rounding errors, resulting in "incorrect" results when doing equality checks between floating-point numbers.
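To see the stored approximation directly, you can ask Python to print 0.1 with more digits than the default repr shows:

```python
# The decimal 0.1 has a recurring binary expansion, so the value actually
# stored is only an approximation; printing extra digits makes it visible:
print(f"{0.1:.20f}")  # 0.10000000000000000555
```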
Even more interesting, the IEEE 754 standard has precision limits for large numbers as well. Beyond 2⁵³, floats are rounded to the nearest multiple of 2. And beyond 2⁵⁴, they're rounded to the nearest multiple of 4:
>>> a = float(2**53)
>>> a == a + 1
True
>>> b = float(2**54)
>>> b == b + 2
True
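Assuming Python 3.9 or later, math.ulp – which returns the spacing between a float and its neighboring representable value – confirms these gaps:

```python
import math

# The gap between adjacent floats (one "unit in the last place") doubles
# with each power of two beyond 2**52:
print(math.ulp(float(2**53)))  # 2.0
print(math.ulp(float(2**54)))  # 4.0
```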
The Fix
Solution 1: Define a custom equality function
Instead of checking for equality, check that the floating-point numbers are close enough so that rounding errors become irrelevant:
EPSILON = 1e-6

def is_close(a, b):
    return abs(a - b) < EPSILON

a = 0.1 + 0.1 + 0.1
b = 0.3
print(is_close(a, b))  # True
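One caveat worth noting: a fixed absolute tolerance does not scale with the magnitude of the operands. A quick sketch of where this bites:

```python
EPSILON = 1e-6

def is_close(a, b):
    return abs(a - b) < EPSILON

# A fixed absolute epsilon is too loose for very small values: these two
# numbers differ by a factor of two, yet still compare as "close":
print(is_close(1e-9, 2e-9))  # True
```

This is why a relative tolerance, as used by math.isclose below, is usually the safer default.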
Solution 2: Use isclose
This strategy is the same as the one above, but this solution uses the math library's implementation:
import math
a = 0.1 + 0.1 + 0.1
b = 0.3
print(math.isclose(a, b)) # True
The isclose function also accepts an optional rel_tol or abs_tol argument to customize the tolerance threshold for rounding errors. The default is a relative tolerance of 1e-09.
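A brief sketch of how these arguments change the result (the specific values here are illustrative, not from the original):

```python
import math

# rel_tol scales the allowed error with the magnitude of the inputs;
# abs_tol sets an absolute floor, which matters for comparisons near zero:
print(math.isclose(100.0, 100.1))                # False with the default rel_tol=1e-09
print(math.isclose(100.0, 100.1, rel_tol=1e-2))  # True: within 1% of each other
print(math.isclose(0.0, 1e-10))                  # False: rel_tol alone fails at zero
print(math.isclose(0.0, 1e-10, abs_tol=1e-9))    # True
```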
Solution 3: Use Decimal types
Instead of working with floats, use Decimal to preserve precision:
from decimal import Decimal
a = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
b = Decimal("0.3")
print(a == b) # True
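One caveat with this solution: Decimal values should be constructed from strings, as above. Passing a float instead copies the float's binary rounding error into the Decimal, reintroducing the original problem:

```python
from decimal import Decimal

# Decimal(0.1) receives the already-rounded binary float, not the exact
# decimal value 0.1, so it differs from Decimal("0.1"):
print(Decimal(0.1) == Decimal("0.1"))  # False
print(Decimal(0.1) + Decimal(0.1) + Decimal(0.1) == Decimal("0.3"))  # False
```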
Note that Decimal arithmetic is slower than float arithmetic. Also, remember that Decimal numbers resolve rounding errors in binary arithmetic but not in decimal arithmetic: values like 1/3 recur in base 10 as well and still must be rounded. For exact rational arithmetic, use Fraction:
from decimal import Decimal
from fractions import Fraction
a = Decimal("1") / Decimal("3") * Decimal("3")
b = Decimal("1")
print(a == b) # False
print(a) # Decimal('0.9999999999999999999999999999')
c = Fraction("1") / Fraction("3") * Fraction("3")
d = Fraction("1")
print(c == d) # True
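For completeness, Fraction also resolves the original 0.1 + 0.1 + 0.1 gotcha exactly, since it stores each value as a numerator/denominator pair:

```python
from fractions import Fraction

# Fraction("0.1") is stored exactly as the ratio 1/10, so no rounding
# ever occurs during the addition:
a = Fraction("0.1") + Fraction("0.1") + Fraction("0.1")
print(a == Fraction("0.3"))  # True
print(a)  # 3/10
```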