Es-discuss - several decimal discussions
MFC at uk.ibm.com
Sat Aug 23 01:49:26 PDT 2008
Some comments in reply to several of the comments and questions from
overnight, from various writers...
> > At the present time, I am only suggesting that ECMAScript support
> > Decimal128, and that there not be *any* implicit conversions to
> > Decimal.
> > Decimal.parse(n) would take n.ToString() and then parse that as a
> > Decimal128 value. If (in the future) additional precisions were
> > allowed by the standard, then there would be a second parameter on
> > Decimal.parse that would allow the precision to be specified.
> > Meanwhile:
> > true: Decimal.parse(1.1) == 1.1m
> An interesting choice. This produces more sensible results but
> directly violates IEEE P754 section 5.3.3.
(It's 5.4.2.) That section, indeed, requires correctly-rounded
conversions be available.
There was much debate about that, because existing implementations of
binary -> character sequences rarely produce correctly rounded
results -- one of the reasons why users are so often confused by
binary floating point (converting 0.1 to a double and then printing
it commonly shows "0.1" -- it might have been better if it produced
(say) "0.10" to indicate the inexactness, much as "3.0" notation is
used to indicate float rather than integer).
However, there's nothing in 754 that says *all* conversions must act
that way, just that there be a conversion that is correctly rounded.
That's a language designer's choice. (More on this below.)
[Aside: IEEE now says, for binary -> string conversion, that the
result should be exact unless rounding is necessary (due to
specifying a smaller number of digits than would be needed for an
exact conversion). So ES really should provide a binary -> string
conversion that does this (and preserves the sign!).]
> If we're using Decimal64, then there are no major issues.
> Converting 1.1 to decimal would produce 1.1m, as both you and I desire.
That conversion would be inexact, so I'd hope you'd get
1.100000000000000m not 1.1m. This is a case where the
preserved exponent tells you something about the past history
(another is that the exponent on a zero after an underflow is the
most negative possible exponent, not 0).
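A sketch of that inexact conversion, again in Python, with a 16-digit context standing in for decimal64; the trailing zeros are left in place, recording that rounding occurred:

```python
from decimal import Decimal, Context

decimal64 = Context(prec=16)                 # decimal64 carries 16 digits
d = decimal64.create_decimal(Decimal(1.1))   # exact binary64 value, then rounded
print(d)   # 1.100000000000000  -- not 1.1
```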
> If we're using Decimal128, then IEEE P754 mandates that binary
> floats be convertible to decimal floats and that the result of the
> conversion of 1.1 to decimal be 1.100000000000000088817841970012523m.
> Hence, a dilemma if we choose Decimal128.
The binary64 value is not 1.1 -- and knowing that it is not is surely
a good thing, rather than trying to 'cover it up' by rounding.
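The 34-digit correctly-rounded conversion can be sketched the same way (a 34-digit Python context standing in for decimal128):

```python
from decimal import Decimal, Context

decimal128 = Context(prec=34)                # decimal128 carries 34 digits
print(decimal128.create_decimal(Decimal(1.1)))
# 1.100000000000000088817841970012523
```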
> > If we're using Decimal128, then IEEE P754 mandates that binary
> > floats be convertible to decimal floats and that the result of the
> >conversion of 1.1 to decimal be 1.100000000000000088817841970012523m.
> > Hence, a dilemma if we choose Decimal128. Do we obey the standard?
> Deviations from the standard merit bug reports.
Agreed, if Decimal.parse were described as being the 754-conformant
conversion operation. However, in this case, ES could define
Decimal.parse as not being correctly rounded and provide another
method which gives you the correctly rounded conversion (when and if
ESn.n claims to be conformant to 754). But it might be better to
have Decimal.parse correctly round in the first place.
(The expert on binary FP <--> decimal FP conversions is Michel Hack,
btw -- this committee might want to ask his opinion; he wrote much of
5.12.2 in 754.)
> On Fri, Aug 22, 2008 at 11:39 AM, Waldemar Horwat <waldemar at google.com>
> > Sam Ruby wrote:
> >> When dealing with currency, having 0.05 + 0.05 produce 0.10 is a
> > Contrary to some beliefs, the different cohort members in IEEE
> > P754 are *not* a valid indicator of precision. To prove this,
> > assume that the different cohort members are an indicator of
> > precision. Then evaluate:
> > 3.00 + 1.00000
> > IEEE P754 returns 4.00000, while the result is actually precise to
> > only two digits after the decimal point. Hence a contradiction. QED.
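Python's decimal module follows the 754 rule here, so the example (and the earlier currency one) is easy to reproduce:

```python
from decimal import Decimal

# The ideal exponent of a sum is the smaller of the operands'
# exponents, so the smaller quantum carries through:
print(Decimal('3.00') + Decimal('1.00000'))   # 4.00000
print(Decimal('0.05') + Decimal('0.05'))      # 0.10
```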
I think this is looking at it backwards. Before the calculation, the
different exponents might well be a valid indicator of quantum of a
measurement (for example). After the calculation the result may
continue to be a valid indication of quantum (e.g., when adding up
currency values) and in certain special cases this is very useful.
But the result most certainly does not attempt to maintain significance.
For example, 6.13 * 2.34 gives 14.3442, a six-digit exact result,
even though each of the inputs had only three significant digits.
The arithmetic is not significance arithmetic, and nor is it
claimed or intended to be. So there is no contradiction at all.
Consider, too, the slightly different calculation:
3.01 + 1.00001
The answer here is 4.01001. If this were some kind of 'significance'
or 'decimal precision' arithmetic then one might argue that the last
3 digits should be removed -- but they are not. All numbers going
into an operation are (and must be assumed to be) exact.
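Both calculations can be checked with Python's decimal module, which implements the same unnormalized arithmetic:

```python
from decimal import Decimal

# An exact six-digit product from two three-digit inputs:
print(Decimal('6.13') * Decimal('2.34'))      # 14.3442

# No digits are dropped; all inputs are treated as exact:
print(Decimal('3.01') + Decimal('1.00001'))   # 4.01001
```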
The calculation of the exponent is the same in the two cases. There
is no normalization step (avoiding this step, among other things,
helps performance -- when adding a column of currency numbers, for
example, often all the exponents will be the same (even though some
values will have trailing zeros), so there will be no need to align
the numbers before each addition and no need to normalize after.)
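A Python sketch of that currency case: every operand shares the same exponent, so the sum needs no alignment before each add and no normalization afterwards, and the result keeps the currency quantum:

```python
from decimal import Decimal

# All operands have exponent -2, including ones with trailing zeros.
amounts = [Decimal('12.10'), Decimal('0.05'), Decimal('7.00')]
print(sum(amounts, Decimal('0.00')))   # 19.15
```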
> This example makes me realize how utterly confused and ignorant I am
> about the rationale for decimal.
The rationale for decimal is primarily the need to represent decimal
fractions exactly. That's at:
The discussion here is whether unnormalized arithmetic is a good
thing or not, which is a subsidiary point. Unnormalized decimal
arithmetic has the huge advantage that it mirrors basic arithmetic
(as taught in schools etc.), which preserves trailing zeros and
treats all values as exact until forced otherwise (e.g., by an
inexact division). This is not 'significance arithmetic', and you
really do not want that in a language anyway.
[Aside: the value of normalization in binary floating-point is that
it permits one bit of the significand to be implied, so you get more
precision in a given encoding. It's a compression technique (which
has a negative impact in that it's harder and generally slower to
implement than unnormalized binary FP).]
> Finally, I'd like to take a poll: Other than people working on decimal
> at IBM and people on the EcmaScript committee, is there anyone on this
> list who thinks that decimal adds significant value to EcmaScript? If
> so, please speak up. Thanks.
With respect, I think you are asking the wrong set of people. The ES
committee is not (and nor should it be) a collection of ES users or a
collection of arithmetic experts. It's a committee of experts in
(ES) language design.
Decimal arithmetic is sufficiently important that it is already
available in all the 'Really Important' languages except ES
(including C, Java, Python, C#, COBOL, and many more). EcmaScript is
the 'odd one out' here, and not having decimal support makes it
terribly difficult to move commercial calculations to the browser for
'cloud computing' and the like.
> >>> "An error" or "you can't do it" is not a valid answer because it
> >>> violates IEEE P754.
> >> I'm clearly not following something.
> >> I'm suggesting that there not be any implicit conversions from
> >> binary64 to decimal128. I've made a suggestion as to what
> >> Decimal.parse should do. If there are no other means to do a
> >> conversion, then I can't answer the question about what such a
> >> non-existent method might do.
> >> If you suggest that we need to do implicit conversions, I am aware of
> >> a few different mechanisms to do so. I think that the one that
> >> produces the most expected results would be based on
> >> decimal128-convertFromDecimalCharacter.
> >> If we want to provide more explicit conversions, then such named
> >> methods should match the definition in IEEE P754.
> >> I don't know how to more fully answer the question.
> To comply with IEEE P754, we must provide conversions and they must
> have the (rather poor for Decimal128) semantics given by that standard.
Yes, if you claim 754 compliance then you must have conversions that
are correctly rounded (and this is true for binary Numbers, too).
Correct rounding is good semantics, and also ensures that all
implementations provide precisely the same results, which is also
generally a good thing.
However, there is no 754 requirement whatsoever that *implicit*
conversions be provided, and for several reasons it is probably safer
if they are not. That is a language design choice.
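Python's decimal module made the same design choice, which gives a feel for how it plays out in practice: mixing a binary float into decimal arithmetic is an error, and the conversion must be requested explicitly:

```python
from decimal import Decimal

try:
    Decimal('1.10') + 1.1        # no implicit binary -> decimal conversion
except TypeError as e:
    print('rejected:', e)

# The conversion must be explicit -- and it is then exact:
print(Decimal('1.10') + Decimal(1.1))
```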
So I think this comes down to 'what to do for Decimal.parse'? There
are two obvious choices:
* Sam's proposal for Decimal.parse is a perfectly good one: if the
source is a binary floating point Number then use the existing
binary64 -> string conversion in ES to get a (decimal) string,
and then convert from that to decimal128 (which should always be
exact).
This has the advantage that it is simple to define (given that
binary64 -> string already has a definition) and to implement,
and in the ES3.1 timeframe that may be important.
It has the disadvantage that different implementations can give
different results for the conversion. This is 'ugly', but may
not be important given that one is starting from a binary64.
* The alternative is to do a correctly-rounded conversion. This is
equally simple to define but a little harder to implement (in
that it does not already exist in ES implementations). However,
Java already has this in the BigDecimal class, and there are also
now hardware implementations, so it's not a showstopper. Also,
this could also provide or use a correctly-rounded (or exact) and
sign-preserving binary64 -> string conversion.
The advantage of the second approach is that ES would have
754-compliant binary -> decimal conversions from day 1, and at the
same time could provide the enhanced conversion for binary -> string.
The first approach would need the addition of a different method
later (Decimal.correctparse?) to comply. But the first approach may
be more in the spirit of ES.
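The two choices can be sketched side by side in Python (Decimal.parse is the proposed ES method; here repr plays the role of the ES binary64 -> string conversion, and a 34-digit context plays decimal128):

```python
from decimal import Decimal, Context

n = 1.1   # a binary64 Number

# Choice 1: go via the existing number -> string conversion, then
# parse the (short) decimal string exactly.
via_string = Decimal(repr(n))
print(via_string)                  # 1.1

# Choice 2: a correctly-rounded binary64 -> decimal128 conversion.
decimal128 = Context(prec=34)
print(decimal128.create_decimal(Decimal(n)))
# 1.100000000000000088817841970012523
```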
[And thanks to Ingvar -- I think I'm now no longer in 'digest' mode :-)]