Re: endianness (was: Observability of NaN distinctions — is this a concern?)

Vladimir Vukicevic vladimir at
Sun Mar 31 13:37:45 PDT 2013

(Apologies for breaking threading -- subscribed too late to have 
original message to reply to.)

David Herman wrote:
> On Mar 27, 2013, at 6:51 AM, Andreas Rossberg wrote:
> > There actually are (third-party) projects
> > with ports of V8 and/or Chromium to big endian architectures.
> It would be helpful to have more information about what these platforms and projects are.

The Wii U is probably the highest-profile example of this; it's 
PowerPC-based, and they're doing a bunch of HTML5 app work, like what 
they announced at GDC.

> > WebGL
> > code should not break or become prohibitively expensive on them all of
> > a sudden.
> But WebGL code doesn't break either way. It's if we *don't* mandate little-endian that
> code breaks. As for the expense, it has to be weighed against content breaking. Not to
> mention the burden it places on developers to write portable code without even having
> big-endian user agents to test on. (I suppose they could use DI and shim the typed array
> constructors with simulated big-endian versions. Ugh...)

The problem, as I see it at least, is that if little-endian is mandated, 
then effectively we *have* broken WebGL, or at least the possibility of 
ever having performant WebGL on those platforms.  The underlying OpenGL 
API will always be native-endian.  While big-endian processors often do 
have ways of efficiently doing byte-swapped loads and stores, that 
doesn't help when passing bulk data down.  For example, vertex skinning 
is the standard way of doing skeletal animation.  It's often done on the 
CPU, and it involves transforming a bunch of floating-point data.  The 
result is then uploaded to the GPU for rendering.
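To make the pattern concrete, here's a minimal sketch of the CPU-side 
step being described (function and parameter names are illustrative, 
not from any real engine): each vertex is transformed by a bone matrix, 
and the resulting Float32Array's raw bytes are what would get handed to 
gl.bufferData in one bulk call.

```javascript
// Transform an array of xyz positions by a single 4x4 column-major
// bone matrix (one-bone skinning, for illustration).  The output
// Float32Array is uploaded to the GPU as-is -- which is exactly why
// its in-memory byte order matters to the GL implementation.
function skinPositions(positions, m) {
  var out = new Float32Array(positions.length);
  for (var i = 0; i < positions.length; i += 3) {
    var x = positions[i], y = positions[i + 1], z = positions[i + 2];
    out[i]     = m[0] * x + m[4] * y + m[8]  * z + m[12];
    out[i + 1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
    out[i + 2] = m[2] * x + m[6] * y + m[10] * z + m[14];
  }
  return out;
}
```

On a little-endian-mandated spec, a big-endian GL stack would have to 
byte-swap every element of `out` before (and possibly after) the upload.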

If typed arrays are fixed to be little endian, that means that on big 
endian platforms one of two things will need to happen:

- the application will need to manually byte-swap (in JS) by aliasing 
the Float32Array as a Uint8Array and twiddling bytes.
- the WebGL implementation will need to copy every incoming buffer 
whose element size is greater than 1 byte and byte-swap it before 
passing it down to GL -- it can either allocate a second buffer, swap 
into it, and then throw it away, or it can swap in place before the GL 
call and unswap after the GL call.

Both of these are essentially murder for performance; so by attempting 
to prevent code from breaking, you're basically guaranteeing that all 
code will effectively break due to performance -- and developers have 
no way to write code that is both portable and performant.
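The first of those options looks something like the following sketch 
(names are illustrative): the Float32Array's buffer is aliased as a 
Uint8Array and every 4-byte group is reversed in place, once per 
element, per frame, in JS.

```javascript
// Reverse each 4-byte group of a Float32Array in place by aliasing
// its underlying buffer as a Uint8Array -- the "twiddling bytes"
// workaround a big-endian platform would be forced into if typed
// arrays were mandated little-endian.
function swapFloat32Bytes(f32) {
  var b = new Uint8Array(f32.buffer, f32.byteOffset, f32.byteLength);
  for (var i = 0; i < b.length; i += 4) {
    var t = b[i];     b[i]     = b[i + 3]; b[i + 3] = t;
        t = b[i + 1]; b[i + 1] = b[i + 2]; b[i + 2] = t;
  }
  return f32;
}
```

Swapping twice is a no-op, which is also why the "swap, call GL, 
unswap" variant works at all.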

The other thing is, I suspect that a large chunk of code using typed 
arrays today will work just fine on big endian platforms, provided that 
the arrays are defined to be native-endian.  Very little code actually 
aliases buffers and munges bytes; the only issues that might come up 
are with loading data from the network, but those are present 
regardless.
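The network case already has a portable answer: DataView takes an 
explicit endianness flag on every access, so code reading a wire format 
with a known byte order works the same on any host, whatever 
native-endian typed arrays do.  A minimal sketch (the format and names 
are hypothetical):

```javascript
// Read `count` little-endian float32 values from an ArrayBuffer
// (e.g. one fetched from the network) into a native-endian
// Float32Array.  The `true` flag tells DataView to decode each
// value as little-endian regardless of the host byte order.
function readFloatsLE(arrayBuffer, count) {
  var view = new DataView(arrayBuffer);
  var out = new Float32Array(count);
  for (var i = 0; i < count; i++) {
    out[i] = view.getFloat32(i * 4, true);
  }
  return out;
}
```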

     - Vlad


More information about the es-discuss mailing list