ES Decimal status

Sam Ruby rubys at intertwingly.net
Wed Sep 24 09:18:02 PDT 2008


Brendan Eich wrote:
> On Sep 24, 2008, at 8:28 AM, Sam Ruby wrote:
> 
>>>> My apologies.  That wasn't the question I was intending to ask.
>>>>
>>>> Can you identify code that today depends on numeric binary 64 floating
>>>> point which makes use of operations such as unrounded division and
>>>> depends on trailing zeros being truncated to compute array indexes?
>>>>
>>>> I would think that such code would be more affected by factors such as
>>>> the increased precision and the fact that 1.2-1.1 produces
>>>> 0.09999999999999987 than on the presence or absence of any trailing
>>>> zeros.
>>>>
>>>> But given the continued use of words such as "broken" and "unusable",
>>>> I'm wondering if I'm missing something obvious.
> 
> 
> The only thing that might be called obvious here is the invariant 
> (modulo teeny tiny numbers) that people have pointed out (a === b => 
> o[a] is o[b]). Such JS invariants tend to be required by web content. 
> Postel's Law means you accept everything that flies in ES1, and have 
> trouble being less liberal in ES2+. Web authors crawl the feature vector 
> space and find all the edges, so at least what you did not accept in v1 
> becomes law. But these are generalizations from experience with 
> invariants such as typeof x == "object" && !x => x === null and typeof x 
> == typeof y => (x == y <=> x === y).
> 
> Beyond this conservatism in breaking invariants based on experience, it 
> turns out that % and / results do flow into array indexes. From 
> SunSpider's 3d-raytrace.js (which came from some other benchmark suite, 
> IIRC):
> 
> // Triangle intersection using barycentric coord method
> function Triangle(p1, p2, p3) {
>     var edge1 = sub(p3, p1);
>     var edge2 = sub(p2, p1);
>     var normal = cross(edge1, edge2);
>     if (Math.abs(normal[0]) > Math.abs(normal[1]))
>         if (Math.abs(normal[0]) > Math.abs(normal[2]))
>             this.axis = 0;
>         else
>             this.axis = 2;
>     else
>         if (Math.abs(normal[1]) > Math.abs(normal[2]))
>             this.axis = 1;
>         else
>             this.axis = 2;
>     var u = (this.axis + 1) % 3;
>     var v = (this.axis + 2) % 3;
>     var u1 = edge1[u];
>     var v1 = edge1[v];
>     . . .
> }
> 
> Triangle.prototype.intersect = function(orig, dir, near, far) {
>     var u = (this.axis + 1) % 3;
>     var v = (this.axis + 2) % 3;
>     var d = dir[this.axis] + this.nu * dir[u] + this.nv * dir[v];
>     . . .
> }
> 
> but the operands of % are integers here. So long as decimal and double 
> don't change the results from being integral, these use-cases should be 
> ok (if slower).
> 
> The concern remains, though. Not due to power-of-five problems that 
> would lead to 0.09999999999999987 or the like, but from cases where a 
> number was spelled with extra trailing zeros (in external data, e.g. a 
> spreadsheet) but fits in an integer, or otherwise can be expressed 
> exactly using powers of two. The burden of proof here is on the 
> invariant breaker :-/.
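The trailing-zero scenario above (integral values spelled with extra
zeros in external data) can be sketched under today's binary64
semantics, where parsing simply discards the zeros; the values and
names below are illustrative, not from any quoted code:

```javascript
// Trailing zeros in external data: binary64 parsing normalizes them
// away, so the value still works as an exact integral array index.
var cell = "2.00";              // e.g. a value from a spreadsheet export
var n = Number(cell);           // 2 -- trailing zeros are not preserved
var arr = ["x", "y", "z"];
console.log(n === 2);           // true
console.log(arr[n]);            // "z"
// A decimal type that kept 2.00 distinct from 2 would have to decide
// whether arr[2.00] still finds this element.
```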

I fully appreciate the need for a high bar here.

The problem is that there are two invariants in tension.  Today, ===
is (to a high degree of accuracy) an eq operator: values that are ===
are interchangeable, including as property keys.  But people have
argued against 1.2 !== 1.20 because of another invariant:

a == b && typeof(a) === typeof(b) implies a === b

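Both invariants hold for today's binary64 Numbers, which is why either
choice for decimal must break somebody's expectation.  A minimal
sketch of the two, under today's semantics (where 1.2 and 1.20 denote
the same double, so this cannot yet distinguish the proposals):

```javascript
// Invariant 1: === behaves as an eq operator for property keys.
var o = {};
var a = 1.2, b = 1.20;          // the same binary64 value today
o[a] = "found";
console.log(a === b);           // true
console.log(o[b]);              // "found" -- a === b means o[a] is o[b]

// Invariant 2: == plus matching typeof implies ===.
console.log(typeof a === typeof b && a == b);  // true
console.log(a === b);           // true, as the invariant requires
```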
We can't satisfy both.  My initial preference was that 1.2 !== 1.20.
But since we are not aware of code that relies on fractional array
indexes, and we are aware of code that does generic typeof and
equality testing, I would give the latter invariant the greater
weight.
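For what it's worth, the index arithmetic in the quoted raytracer
stays exact under binary64: with small integer operands, % produces an
exactly integral result.  A quick check of that pattern (my own
sketch, not taken from the benchmark):

```javascript
// The (axis + N) % 3 pattern from 3d-raytrace.js: with small integer
// operands, binary64 % yields exactly integral in-range results, so
// array indexing is safe.
for (var axis = 0; axis < 3; axis++) {
  var u = (axis + 1) % 3;
  var v = (axis + 2) % 3;
  console.log(u === Math.floor(u) && v === Math.floor(v));  // true
}
```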

> /be

- Sam Ruby
