Consistent decimal semantics

Sam Ruby rubys at
Mon Aug 25 17:53:38 PDT 2008

Waldemar Horwat wrote:
> We're going round in circles on a few issues surrounding Decimal.
> Some of these have clear resolutions; others have a little wiggle
> room.  Here's my summary:

I don't believe that we are going around in circles.  I've got a 
tangible implementation that is tracking to the agreements.  I published 
a list of results recently[1].  I've also published my code if you care 
to check it out[2].

> - Should decimal values behave as objects (pure library
> implementation) or as primitives?
> If they behave as objects, then we'd get into situations such as 3m
> != 3m in some cases and 3m == 3m in other cases.  Also, -0m != 0m
> would be necessary.  This is clearly unworkable.

That would indeed be unworkable, if you can suggest a case where it 
would actually occur.  A tangible scenario would help.

> - Should == and === distinguish cohort members?
> The only reasonable answer is no, and is consistent with existing
> treatment of -0 and +0.  Otherwise you'd break integer arithmetic:
> 1e12m - 1e12m === 0m would be false.

While I disagree with your reasoning, I agree with your conclusion.

Given that IEEE defines a total ordering, and === is purportedly 
"strict", it would be neither unreasonable nor broken to define strict 
equality in such a manner.
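
For what it's worth, Python's decimal module (which follows the same 
General Decimal Arithmetic specification) already exposes this total 
ordering as compare_total, so the distinction is observable today.  This 
is an analogy, not a sketch of ES semantics:

```python
from decimal import Decimal

# == ignores which member of a cohort you have; the total ordering does not.
a = Decimal('1.0')   # coefficient 10, exponent -1
b = Decimal('1.00')  # coefficient 100, exponent -2
print(a == b)               # True: numerically equal
print(a.compare_total(b))   # nonzero: distinct members of the same cohort
```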

That being said, ES "strict" equals is not strict today, and there are 
other usability, theoretical, and compatibility criteria involved, not 
all of them pointing in the same direction.

I'm comfortable with a definition of === where 1e12m - 1e12m === 0m is true.
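
To make that case concrete, here is the same computation in Python's 
decimal module, which follows the same IEEE rules: the subtraction 
produces the non-canonical cohort member 0E+12, yet it still compares 
equal to zero:

```python
from decimal import Decimal

d = Decimal('1e12') - Decimal('1e12')
print(d)                 # 0E+12: a non-canonical member of zero's cohort
print(d == Decimal(0))   # True: cohort members compare equal
```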

> - What should cross-type == do on 5 == 5m?
> Here there are a couple sensible choices:  It could treat all decimal
> values as different from all Number values, or it could convert both
> values to a common type (decimal because it's wider) and compare.

Decimal is wider, but with a caveat: every number expressible as a 
binary64 maps to a unique decimal128 value and round-trips back to the 
same binary64 number[3], yet the decimal128 value a binary64 number maps 
to does not denote exactly the same point on the real number line as the 
original binary64 number[4].
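
Both halves of that caveat can be demonstrated with Python's decimal 
module, using a 34-digit context to stand in for decimal128 (an analogy, 
not the proposed ES conversion itself):

```python
from decimal import Decimal, Context

ctx = Context(prec=34)      # decimal128 carries 34 significant digits
exact = Decimal(5.1)        # the exact real value of the binary64 nearest 5.1
narrowed = ctx.plus(exact)  # round it to decimal128 precision

print(narrowed)                  # 5.099999999999999644728632119949907
print(float(narrowed) == 5.1)    # True: round-trips to the same binary64
print(narrowed == exact)         # False: not the same real-number point
```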

> My inclination would be the latter, which would make 5 == 5m true.
> Note that, unless we choose option 2b below, 5.1 == 5.1m would be
> false because 5.1 is equal to 5.099999999999999644728632119949907m.

Agree that 5 == 5m is true, and that 5.1 == 5.1m is false.
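
Python's cross-type == between Decimal and float behaves exactly this 
way, comparing by value, which makes for a quick sanity check:

```python
from decimal import Decimal

print(Decimal('5') == 5.0)     # True: both denote the same real number
print(Decimal('5.1') == 5.1)   # False: binary64 5.1 is really 5.0999...
```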

> - What should cross-type === do on 5 === 5m?
> These objects are of different types, so it should return false.


> - How should mixed type arithmetic work in general?
> There are a few consistent design alternatives:
> 1.  Always disallow it.  For consistency you'd have to disallow ==
> between Number and decimal as well.
> 2.  Allow it, converting to the wider type (Decimal128).  There are a
> couple design choices when doing the conversion:
> 2a.  Convert per the IEEE P754 spec:  5.1 turns into
> 5.099999999999999644728632119949907m.  This is how most programming
> languages operate (C++, Java, Lisp, etc.) when converting among the
> built-in floating point values (float -> double etc.).
> 2b.  Convert the Number to a string and then to a Decimal:  5.1 turns
> into 5.1m.  As a special case, -0 would turn into -0m.  This might
> work, but I haven't thought through the implications of this one.
> 3.  Allow it, converting to the narrower type (double).  This would
> break transitivity:  5.00000000000000001m != 5m, but they're both ==
> to 5, so it's a non-starter.

I think that 1 has negative usability characteristics that would rule 
it out.

2a is how I have currently implemented this function.

While 2b works for some values, it quickly degrades.  What should 
1.2-1.1+5m produce?  Being consistent is probably better, as it will 
help people who care to do so track down cases where mixed mode 
operations occur that they didn't intend to be mixed mode.  Finally, 
having a toDecimal(n) that mirrors the existing toFixed(n) function 
would provide the ability for people to contain the cascading of 
unnecessary precision.
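
toDecimal(n) is not yet specified anywhere; here is a minimal sketch of 
what I have in mind, expressed with Python's decimal module and a 
hypothetical to_decimal helper:

```python
from decimal import Decimal

def to_decimal(x, n):
    # Hypothetical analogue of toFixed(n): round the binary64 value x to
    # n fractional digits, but return a Decimal rather than a string.
    return Decimal(x).quantize(Decimal(1).scaleb(-n))

noisy = 1.2 - 1.1            # 0.09999999999999987: binary64 noise
print(to_decimal(noisy, 1))  # 0.1: the noise is contained before it
                             # cascades into decimal arithmetic
```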

> - Should trailing zeroes be printed when doing toString on decimal
> values?
> No.  If you print while distinguishing cohort members then you'll be
> required to print -0m as "-0" (which we don't do for Numbers), 1m/.1m
> as "1e1", and will get nasty results when using decimal numbers as
> array indices (barring introducing yet further complications into the
> language).  Furthermore, were you to do an implementation with a "big
> red switch" that turns all numbers into decimal, everything would
> break because toString would print excess digits when printing even
> simple, unrounded values such as 1.00.

You are mixing a number of different topics.

You and I simply disagree when it comes to trailing zeros.
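
Python's decimal module, for what it's worth, takes the position I'm 
advocating: trailing zeros survive conversion to string, while equality 
remains cohort-blind:

```python
from decimal import Decimal

print(str(Decimal('1.00')))             # 1.00: trailing zeros preserved
print(Decimal('1.00') == Decimal('1'))  # True: equality ignores them
print(Decimal('1') / Decimal('0.1'))    # 1E+1: the case cited above
```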

> - Should + concatenate decimal numbers?
> No.

Agreed.  Note that + should continue to act as concatenation if the 
other operand is a string or a non-decimal object.

Furthermore 3.1m+true should be 4.1m.
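
A minimal analogue in Python's decimal module, where int plays the role 
that ToNumber(true) = 1 would play in ES:

```python
from decimal import Decimal

# true converts to 1, an exact integer, so the decimal addition is exact
print(Decimal('3.1') + 1)   # 4.1
```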

> - How many double NaNs should we have?
> Edition 3 says there's only one, but IEEE P754's totalOrder now
> distinguishes between NaN and -NaN as well as different NaNs
> depending on how they got created.  Depending on the implementation,
> this is a potential compatibility problem and an undesirable way for
> implementations to diverge.

There are multiple layers to this.

At the physical layer, even binary64 provides the ability to encode 
2**52-1 different positive NaNs, and an equal number of negative NaNs.
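
These distinct bit patterns are easy to exhibit by constructing NaNs 
directly; a sketch using Python's struct module (assuming, as on common 
hardware, that NaN payloads survive a memcpy round-trip):

```python
import math
import struct

def significand_field(x):
    # Extract the 52-bit significand field of a binary64 value.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return bits & ((1 << 52) - 1)

# Two quiet NaNs with different payloads: == can't tell them apart,
# but the underlying bits can.
nan_a = struct.unpack('<d', struct.pack('<Q', 0x7FF8000000000001))[0]
nan_b = struct.unpack('<d', struct.pack('<Q', 0x7FF8000000000002))[0]
print(math.isnan(nan_a) and math.isnan(nan_b))               # True
print(significand_field(nan_a) == significand_field(nan_b))  # False
```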

At a conceptual layer, ES is described as having only one NaN. 
This layer is of the least consequence, and the most easily changed.

At an operational layer, the operations defined in ES4 are (largely?) 
unable to detect these differences.  For backwards compatibility 
reasons, it would be undesirable to change any of these existing 
interfaces, unless there were a really, really, really compelling reason 
to do so.

Overall, as long as we don't violate the constraints presented by the 
physical and existing operational layers, we may be able to introduce 
new interfaces (such as Object.identity) that are able to distinguish 
things that were not previously distinguishable.
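
Negative zero is the existing precedent for this kind of layering: 
equality cannot see its sign, but an identity-style probe can.  In 
Python terms:

```python
import math

print(0.0 == -0.0)               # True: equality is sign-blind
print(math.copysign(1.0, -0.0))  # -1.0: the sign is still observable
```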

> - How many decimal NaNs should we have?
> Presumably as many as we have double NaNs....

Actually, there are about 10**46-2**52 more.

> Waldemar

- Sam Ruby

