Thoughts on IEEE P754

Sam Ruby rubys at intertwingly.net
Fri Aug 22 13:06:13 PDT 2008


On Fri, Aug 22, 2008 at 3:55 PM, Waldemar Horwat <waldemar at google.com> wrote:
> Sam Ruby wrote:
>> On Fri, Aug 22, 2008 at 2:28 PM, Waldemar Horwat <waldemar at google.com> wrote:
>>> Sam Ruby wrote:
>>>> Waldemar Horwat wrote:
>>>>> Some tidbits about our embedding of decimal:
>>>>>
>>>>> - Contagion should be towards decimal if decimal and binary are mixed
>>>>> as operands.  5.3m + 1 should be 6.3m, not 6.3.  If we use 128-bit
>>>>> decimal, this also makes the behavior of heterogeneous comparisons
>>>>> (binary compared to decimal) sensible.
>>>> What should 5.3m + 1.0000000000000001 produce?
>>>>
>>>> I also don't understand the heterogeneous comparisons comment.  What
>>>> should 1.0000000000000001 == 1.0000000000000001m produce?
>>> Depends.  There are many decimal formats.  Which decimal format and representation are you specifying?
>>
>> Decimal128.  But the key to this question is the fact that the
>> binary64 floating point constant is indistinguishable from the value
>> 1.
>
> If you're using Decimal128, then 1.0000000000000001 == 1.0000000000000001m is false because the double on the left evaluates to 1.  If you're using Decimal64, then 1.0000000000000001 == 1.0000000000000001m is true because both sides evaluate to their respective versions of 1.

And this makes heterogeneous comparisons sensible?
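
For concreteness, here is the same check in Python, whose floats are
also IEEE 754 binary64, so the literal rounds the same way an
ECMAScript engine would round it (this is an illustration, not
proposed ECMAScript syntax):

x = 1.0000000000000001
print(x == 1.0)   # True: the nearest binary64 value to the literal is
                  # exactly 1, so the trailing 1 is lost at parse time,
                  # before any comparison can see it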

I'm suggesting that 1.0000000000000001 is "about 1" and any operator,
be it addition, subtraction, multiplication, division, or even double
equals, first "downconverts" values with additional precision before
applying the operation.

A few examples:

false: 1.2 - 1.1 == 0.1
false: 1.2m - 1.1 == 0.1
false: 1.2 - 1.1m == 0.1
false: 1.2m - 1.1m == 0.1
false: 1.2 - 1.1 == 0.1m
false: 1.2m - 1.1 == 0.1m
false: 1.2 - 1.1m == 0.1m
true: 1.2m - 1.1m == 0.1m
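
These distinctions can be sketched today with Python's decimal module,
with the context set to 34 digits to approximate Decimal128; here
Decimal('...') stands in for the proposed m suffix, and Decimal(float)
models contagion toward decimal (again an illustration, not
ECMAScript):

from decimal import Decimal, getcontext

getcontext().prec = 34  # approximate IEEE 754 decimal128 (34 digits)

# Pure binary64: the classic representation-error surprise.
print(1.2 - 1.1 == 0.1)                                   # False

# Mixed, with contagion toward decimal: converting the binary64
# value 1.1 to decimal preserves its binary representation error
# exactly, so the difference still is not 0.1.
print(Decimal('1.2') - Decimal(1.1) == Decimal('0.1'))    # False

# Pure decimal arithmetic is exact for these operands.
print(Decimal('1.2') - Decimal('1.1') == Decimal('0.1'))  # True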


> Here's a question for you:
>
> What should the full result of converting 1.1 to a Decimal be in ECMAScript?  Describe the precise answer and justify your choice of answer.  Hint:  This would behave remarkably differently depending on whether we settle on Decimal64 or Decimal128.

At the present time, I am only suggesting that ECMAScript support
Decimal128, and that there not be *any* implicit conversions to
Decimal.

Decimal.parse(n) would take n.ToString() and then parse that as a
Decimal128 value.  If (in the future) additional precisions were
allowed by the standard, a second parameter on Decimal.parse would
allow the precision to be specified.
Meanwhile:

true: Decimal.parse(1.1) == 1.1m
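
Here is a sketch of the intended semantics, again in Python, where
decimal_parse is a hypothetical stand-in for the proposed
Decimal.parse, and Python's repr plays the role of ToString (both
produce the shortest string that round-trips back to the same double):

from decimal import Decimal, getcontext

getcontext().prec = 34  # approximate decimal128

def decimal_parse(n):
    # Go through the number's string form rather than its exact
    # binary expansion: repr(1.1) is '1.1', not
    # '1.100000000000000088817841970012...'.
    return Decimal(repr(n))

print(decimal_parse(1.1) == Decimal('1.1'))  # True
print(Decimal(1.1) == Decimal('1.1'))        # False: exact conversion
                                             # exposes the binary error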

>    Waldemar

- Sam Ruby

