Out-of-range decimal literals

Sam Ruby rubys at intertwingly.net
Thu Sep 18 06:49:29 PDT 2008


On Thu, Sep 18, 2008 at 9:10 AM, Igor Bukanov <igor at mir2.org> wrote:
> Should ECMAScript allow decimal literals that cannot be represented as a
> 128-bit decimal? I.e., should the following literals give a syntax
> error:
>
> 1.000000000000000000000000000000000000000000000000001m
> 1e1000000000m ?
>
> IMO allowing such literals would just give another source of errors.

Languages have "personalities", and people build up expectations based
on these characteristics.  As much as possible, I'd like to suggest
that ECMAScript be internally consistent, and not let one independent
choice (binary vs. decimal) have unexpected implications for another
(signaling vs. quiet operations).
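
For reference, binary 64 arithmetic in ECMAScript today is uniformly
"quiet": overflow and invalid operations produce Infinity and NaN
rather than raising errors.  Any current js shell will show, for
example:

js> 1e1000
Infinity
js> 0 / 0
NaN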

As a tangent, both binary 64 and decimal 128 floating point provide
"exact" results for a number of operations; they simply do so for
different domains of numbers.  2**-52 can be represented exactly in
binary 64 floating point, for example, but not in decimal 128 floating
point.  It is only the prevalence of things like decimal literals,
which are naturally expressed in decimal, that tends to produce
inexact but correctly rounded values in binary 64 and exact values,
with no rounding needed, in decimal 128.
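
To make the binary 64 side of that concrete: the decimal literal 0.1
has no exact binary 64 representation, so today's engines store the
nearest value and round each operation correctly.  Any js shell (no
decimal support needed) will show something along these lines:

js> (0.1).toPrecision(21)
0.100000000000000005551
js> 0.1 + 0.2
0.30000000000000004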

As to your specific question, here are a few results from my branch of
SpiderMonkey:

js> 1.000000000000000000000000000000000000000000000000001
1
js> 1e1000000000
Infinity
js> 1.000000000000000000000000000000000000000000000000001m
1.000000000000000000000000000000000
js> 1e1000000000m
Infinity
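
That is, out-of-range decimal literals behave the way out-of-range
binary literals already do: the first is quietly rounded to decimal
128's 34 significant digits, and the second overflows to Infinity,
rather than being treated as a syntax error.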

> Regards, Igor

- Sam Ruby

