The following expression evaluates to
false in C#:
(1 + 1 + 0.85) / 3 <= 0.95
And I suppose it does so in most other programming languages that implement IEEE 754, since
(1 + 1 + 0.85) / 3 evaluates to
0.95000000000000007, which is greater than 0.95.
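The same behavior is easy to reproduce outside C#; here is a sketch in Java, where any conforming IEEE 754 double arithmetic should agree:

```java
public class FloatDemo {
    public static void main(String[] args) {
        // 1 + 1 + 0.85 = 2.85, but neither 2.85 nor 0.95 is exactly
        // representable as a binary double, so the quotient rounds to a
        // value slightly above 0.95.
        double q = (1 + 1 + 0.85) / 3;
        System.out.println(q);
        System.out.println(q <= 0.95); // prints false
    }
}
```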
However, even though Excel should implement most of IEEE 754 too, the following evaluates to TRUE in Excel 2013:
= ((1 + 1 + 0.85) / 3 <= 0.95)
Is there any specific reason for that? The article linked above does not mention any Excel-specific deviations from the standard that could lead to this behavior. Can you tell Excel to round strictly according to IEEE 754?
Please note that even though most Excel questions should be asked on superuser.com, this question deals with floating-point arithmetic, which is a common problem in programming languages. From the viewpoint of this question’s topic, Excel is a programming language like C# or Java.
The article that you linked to is explicit that Excel does something nonstandard with values near zero:
Example when a value reaches zero
1. In Excel 95 or earlier, enter the following into a new workbook: A1: =1.333+1.225-1.333-1.225
2. Right-click cell A1, and then click Format Cells. On the Number tab, click Scientific under Category. Set the Decimal places to 15.
Instead of displaying 0, Excel 95 displays -2.22044604925031E-16.
Excel 97, however, introduced an optimization that attempts to correct for this problem. Should an addition or subtraction operation result in a value at or very close to zero, Excel 97 and later will compensate for any error introduced as a result of converting an operand to and from binary.
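The Excel 95 result quoted above is exactly what raw IEEE 754 double arithmetic produces: evaluated left to right, the rounding error of the first addition survives the two subtractions and leaves a residue of -2^-52 instead of zero. A sketch in Java:

```java
public class ZeroDemo {
    public static void main(String[] args) {
        // (1.333 + 1.225) rounds; subtracting 1.333 and then 1.225
        // exposes that rounding error instead of returning 0.
        double r = 1.333 + 1.225 - 1.333 - 1.225;
        System.out.println(r);                      // -2.220446049250313E-16
        System.out.println(r == -Math.pow(2, -52)); // true
    }
}
```

This matches the -2.22044604925031E-16 that Excel 95 displays to 15 digits; Excel 97 and later are the outliers, not the IEEE 754 languages.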
The unspecified “optimization that attempts to correct for this problem” means that caution is warranted when using Excel for numerical computations in which strict agreement with IEEE 754 is required. Perhaps VBA (which is unlikely to have this “optimization”?) could serve as a workaround.
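When strict agreement with IEEE 754 matters, one way to check what a platform actually computed is to inspect the exact decimal value of the double; in Java, the `BigDecimal(double)` constructor performs that conversion with no decimal rounding. A sketch of the verification idea (not a fix for Excel itself):

```java
import java.math.BigDecimal;

public class ExactDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the binary value exactly.
        BigDecimal exact = new BigDecimal((1 + 1 + 0.85) / 3);
        System.out.println(exact);
        // Comparing against the exact decimal 0.95 confirms that the
        // double result is strictly greater than 0.95.
        System.out.println(exact.compareTo(new BigDecimal("0.95")) > 0); // true
    }
}
```

A platform whose comparison disagrees with this exact arithmetic, as Excel 2013 does here, is not following IEEE 754 strictly.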