Code points vs Unicode scalar values

Anne van Kesteren annevk at annevk.nl
Wed Sep 11 03:43:55 PDT 2013


On Thu, Sep 5, 2013 at 10:08 PM, Brendan Eich <brendan at mozilla.com> wrote:
> Thanks for the reminders -- we've been over this.

It's not clear those arguments were carefully considered, though. Shawn
Steele raised the same concerns I did. The unicode.org thread also
suggests that the ideal value space for a string is Unicode scalar
values (i.e. what UTF-8 can encode), not code points. It did indicate
that code points are there only because of legacy, but JavaScript has
16-bit code units because of legacy too. If we're going to offer a
higher level of abstraction over the basic string type, we might as
well make it a UTF-8-safe layer.
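
To make the distinction concrete, a quick sketch (mine, not from the
thread): a lone surrogate is a valid 16-bit code unit and a code point
in today's strings, but it is not a Unicode scalar value, so any
UTF-8-based API has to reject it.

    var lone = "\uD800";        // unpaired high surrogate: a code point,
                                // but not a Unicode scalar value
    lone.length;                // 1 -- one 16-bit code unit
    lone.charCodeAt(0);         // 0xD800
    encodeURIComponent(lone);   // throws URIError: percent-encoding is
                                // UTF-8-based and cannot represent it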

If you need anything for tests, you can just ignore the higher level
of abstraction and operate on 16-bit code units instead.
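
For example (again just a sketch), the existing ES5 API already exposes
the code-unit level directly and would keep doing so underneath any
scalar-value layer:

    var s = "\uD83D\uDCA9";        // one astral code point, two code units
    s.length;                      // 2
    s.charCodeAt(0).toString(16);  // "d83d"
    s.charCodeAt(1).toString(16);  // "dca9"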


-- 
http://annevankesteren.nl/

