Code points vs Unicode scalar values

Anne van Kesteren annevk at
Wed Sep 11 03:43:55 PDT 2013

On Thu, Sep 5, 2013 at 10:08 PM, Brendan Eich <brendan at> wrote:
> Thanks for the reminders -- we've been over this.

It's not clear the arguments were carefully considered, though. Shawn
Steele raised the same concerns I did. The thread also
suggests that the ideal value space for a string is Unicode scalar
values (i.e., what UTF-8 can encode), not code points. It did indeed
indicate that strings have code points because of legacy, but JavaScript
has 16-bit code units due to legacy too. If we're going to offer a higher
level of abstraction over the basic string type, we may as well make
that a UTF-8-safe layer.

If you need anything for tests, you can just ignore the higher level
of abstraction and operate on 16-bit code units instead.
