value objects

Allen Wirfs-Brock allen at
Tue Mar 20 15:55:09 PDT 2012


I don't actually see how this is substantially different, in concept, from Sam Ruby's decimal work.  The main difference is that you have many more types, all of which have to be dealt with (including coercions/promotions, etc.).  I don't see why you aren't going to run into exactly the same issues (but more so, because of multiple types) that Sam did.  Of course, I don't think Sam's issues were insurmountable; we just couldn't deal with them in a timely manner.  The biggest concern, for me, was that all the issues were handled on a special-case basis. Your proposal is better in that you try to cover most of the common special cases, but that just makes the effort bigger while still leaving fringe cases unsupported (you mention complex numbers and rationals as likely past the cut line).

When we abandoned the decimal work, we said we hoped we could find a generalization that would permit a more open-ended set of value types.  I don't think we should give up on that before we have really tried.  Conversion of operators into method calls and dynamic double dispatch for operand type promotion are well-understood techniques. I've noodled around with these concepts for ES and I don't see why they wouldn't work just fine.  When I get a chance I can walk you through how it could work.

To me, your key insight seems to be that operations upon well-specified, immutable (including key methods) object types (I hate to use that term because it is so loaded) can be optimized in a fairly straightforward manner.  We probably should all be saying to ourselves, well, of course... However, we have a history of wanting to keep built-in objects as mutable as possible, so we tend not to think about new built-ins as generally being hardened in this manner.  Perhaps we should.

Of course, well-specified, immutable, hardened built-in numeric objects of various varieties can be optimized just as easily in the context of an open-ended, extensible design.
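For what it's worth, a hardened, immutable numeric wrapper can even be approximated in user code today. This is a hypothetical sketch (the `uint64` function and interning cache are mine, not the proposal's): freezing the object makes it immutable, and interning one instance per value recovers the `===` behavior from Dave's examples.

```javascript
// Hypothetical sketch: an immutable, interned uint64-like value object.
const cache = new Map();
function uint64(v) {
  const bits = BigInt.asUintN(64, BigInt(v)); // wrap the value to 64 unsigned bits
  const key = bits.toString();
  if (!cache.has(key)) {
    cache.set(key, Object.freeze({ bits }));  // frozen: no later mutation possible
  }
  return cache.get(key);                      // one shared instance per value
}

console.log(uint64(17) === uint64(17)); // true: same interned object
console.log(typeof uint64(17));         // "object", as in the proposal
```

An engine wouldn't need the cache, of course; because the object is immutable, it is free to copy or share representations, which is exactly the optimization latitude being claimed.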

I agree that this is an area where we should be applying ongoing attention, but I don't think we need to close the door on an extensible set of numeric types yet.


On Mar 19, 2012, at 9:04 PM, David Herman wrote:

> I had a great conversation today with my colleagues Michael Bebenita and Shu-Yu Guo, and we came up with what I think is a nicely conservative way to add new kinds of numbers to JS without breaking the intuition that JS has only one type of primitive numbers.
> tl;dr: Pure, immutable objects that can be optimized to unboxed integers.
> Examples:
>    let x = uint64(17);
>    let y = uint64(17);
>    console.log(x === y)            // true
>    console.log(typeof x)           // "object"
>    console.log(Object.isValue(x))  // true
>    console.log(Object.isValue({})) // false
>    function factorial(n) {
>        return n <= 1 ? bignum(1) : n * factorial(n - 1);
>    }
>    console.log(factorial(bignum(500)));
>    // 122013682599111006870123878542304692625357434280319284219241
>    // 358838584537315388199760549644750220328186301361647714820358
>    // 416337872207817720048078520515932928547790757193933060377296
>    // 085908627042917454788242491272634430567017327076946106280231
>    // 045264421887878946575477714986349436778103764427403382736539
>    // 747138647787849543848959553753799042324106127132698432774571
>    // 554630997720278101456108118837370953101635632443298702956389
>    // 662891165897476957208792692887128178007026517450776841071962
>    // 439039432253642260523494585012991857150124870696156814162535
>    // 905669342381300885624924689156412677565448188650659384795177
>    // 536089400574523894033579847636394490531306232374906644504882
>    // 466507594673586207463792518420045936969298102226397195259719
>    // 094521782333175693458150855233282076282002340262690789834245
>    // 171200620771464097945611612762914595123722991334016955236385
>    // 094288559201872743379517301458635757082835578015873543276888
>    // 868012039988238470215146760544540766353598417443048012893831
>    // 389688163948746965881750450692636533817505547812864000000000
>    // 000000000000000000000000000000000000000000000000000000000000
>    // 0000000000000000000000000000000000000000000000000000000
> By defining new object types that are immutable, implementations are free to choose among many representations. They can copy the data or share references whenever they want. Since the integers encapsulate only their bits, optimizing implementations can use a 64-bit integer payload. And fully unboxed representations can literally be 64-bit integers (at least, in optimized code where types have been properly inferred). Finally, it's possible to partially evaluate constructors like
>    uint64(17)
> because the constructor is pure. So even if we didn't have custom literal syntax like 17u64 (which, incidentally, we could), we should still be able to get the same performance.
> This proposal involves one major change in semantics: the built-in operators have to be overloaded to handle the new types. Rather than go all the way towards user-defined value types, though, I've stuck for now with just a fixed set of built-in ones. It's forward-compatible with user-defined overloading, in case we ever wanted to go all the way.
> But the nice thing is, since they are all just objects, there's no shame in having a multiplicity of new types, including:
> * uint8, int8
> * uint16, int16
> * uint32, int32
> * uint64, int64 <-- hey everyone, look at this, this is the important part, right here
> * bignums
> * IEEE754r decimal
> * complex numbers (if anyone cares?)
> * exact rationals (a bridge too far?)
> For all of these, the answer to `typeof` is still just "object". For the most part, they just feel like a nice "batteries included" library.
> There's a rough draft of the strawman here:
> Comments welcome!
> Dave
> _______________________________________________
> es-discuss mailing list
> es-discuss at
