Thoughts on IEEE P754

Waldemar Horwat waldemar at google.com
Fri Aug 22 13:18:34 PDT 2008


Sam Ruby wrote:
> On Fri, Aug 22, 2008 at 3:55 PM, Waldemar Horwat <waldemar at google.com> wrote:
>> Sam Ruby wrote:
>>> On Fri, Aug 22, 2008 at 2:28 PM, Waldemar Horwat <waldemar at google.com> wrote:
>>>> Sam Ruby wrote:
>>>>> Waldemar Horwat wrote:
>>>>>> Some tidbits about our embedding of decimal:
>>>>>>
>>>>>> - Contagion should be towards decimal if decimal and binary are mixed
>>>>>> as operands.  5.3m + 1 should be 6.3m, not 6.3.  If we use 128-bit
>>>>>> decimal, this also makes the behavior of heterogeneous comparisons
>>>>>> (binary compared to decimal) sensible.
>>>>> What should 5.3m + 1.0000000000000001 produce?
>>>>>
>>>>> I also don't understand the heterogeneous comparisons comment.  What
>>>>> should 1.0000000000000001 == 1.0000000000000001m produce?
>>>> Depends.  There are many decimal formats.  Which decimal format and representation are you specifying?
>>> Decimal128.  But the key to this question is the fact that the
>>> binary64 floating point constant is indistinguishable from the value
>>> 1.
>> If you're using Decimal128, then 1.0000000000000001 == 1.0000000000000001m is false because the double on the left evaluates to 1.  If you're using Decimal64, then 1.0000000000000001 == 1.0000000000000001m is true because both sides evaluate to their respective versions of 1.
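
(To make that concrete with a runnable stand-in: Python's decimal module, with Context(prec=16) and Context(prec=34) playing the roles of Decimal64 and Decimal128.  This is only a sketch of the comparison ingredients described above, not the proposed ECMAScript behavior itself.)

    from decimal import Context

    # The binary64 literal collapses to exactly 1.0: the increment 1e-16
    # is less than half the gap (2^-52) between 1.0 and the next double.
    print(1.0000000000000001 == 1.0)    # True

    d64  = Context(prec=16)    # Decimal64 carries 16 significant digits
    d128 = Context(prec=34)    # Decimal128 carries 34 significant digits

    # The 17-digit decimal literal rounds to 1 in Decimal64 but is exact
    # in Decimal128:
    print(d64.create_decimal('1.0000000000000001'))   # 1.000000000000000
    print(d128.create_decimal('1.0000000000000001'))  # 1.0000000000000001
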
> 
> And this makes heterogeneous comparisons sensible?
> 
> I'm suggesting that 1.0000000000000001 is "about 1" and any operator,
> be it addition, subtraction, multiplication, division, or even double
> equals, first "downconverts" values with additional precision before
> applying the operation.
> 
> A few examples:
> 
> false: 1.2 - 1.1 == 0.0
> false: 1.2m - 1.1 == 0.0
> false: 1.2 - 1.1m == 0.0
> false: 1.2m - 1.1m == 0.0
> false: 1.2 - 1.1 == 0.0m
> false: 1.2m - 1.1 == 0.0m
> false: 1.2 - 1.1m == 0.0m
> true: 1.2m - 1.1m == 0.0m

The last one should, of course, be false, or are you saying that 0.1m == 0.0m?
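
(Checking the binary and decimal rows with Python's float, which is binary64, and its decimal module as stand-ins:)

    from decimal import Decimal

    # binary64: neither 1.2 nor 1.1 is exactly representable, and the
    # rounding errors don't cancel in the subtraction.
    print(1.2 - 1.1)                         # 0.09999999999999987
    print(1.2 - 1.1 == 0.0)                  # False

    # decimal: both literals are exact, so the difference is exactly 0.1,
    # which is not equal to 0.
    print(Decimal('1.2') - Decimal('1.1'))   # Decimal('0.1')
    print(Decimal('1.2') - Decimal('1.1') == Decimal('0.0'))   # False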

>> Here's a question for you:
>>
>> What should the full result of converting 1.1 to a Decimal be in ECMAScript?  Describe the precise answer and justify your choice of answer.  Hint:  This would behave remarkably differently depending on whether we settle on Decimal64 or Decimal128.
> 
> At the present time, I am only suggesting that ECMAScript support
> Decimal128, and that there not be *any* implicit conversions to
> Decimal.
> 
> Decimal.parse(n) would take n.ToString() and then parse that as a
> Decimal128 value.  If (in the future) additional precisions were
> allowed by the standard, then there would be a second parameter on
> Decimal.parse that would allow the precision to be specified.
> Meanwhile:
> 
> true: Decimal.parse(1.1) == 1.1m

An interesting choice.  This produces more sensible results but directly violates IEEE P754 section 5.3.3.
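
(A sketch of the parse-via-ToString route, using Python's decimal module; repr here plays the part of ECMAScript's ToString, since both produce the shortest string that round-trips back to the same binary64:)

    from decimal import Decimal

    print(repr(1.1))            # '1.1'
    print(Decimal(repr(1.1)))   # Decimal('1.1'), i.e. 1.1m as desired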

If we're using Decimal64, then there are no major issues.  Converting 1.1 to decimal would produce 1.1m, as both you and I desire.

If we're using Decimal128, then IEEE P754 mandates that binary floats be convertible to decimal floats and that the result of the conversion of 1.1 to decimal be 1.100000000000000088817841970012523m.
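
(Again sketched with Python's decimal module: the Decimal constructor exposes the exact value of the binary64, and rounding it to 34 significant digits gives the correctly rounded Decimal128 result the standard mandates:)

    from decimal import Decimal, getcontext

    # The exact value of the binary64 nearest to 1.1:
    print(Decimal(1.1))
    # Decimal('1.100000000000000088817841970012523233890533447265625')

    getcontext().prec = 34   # Decimal128 carries 34 significant digits
    print(+Decimal(1.1))     # unary plus rounds to the context precision
    # Decimal('1.100000000000000088817841970012523')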

Hence, a dilemma if we choose Decimal128.  Do we obey the standard?

    Waldemar

