[whatwg] Endianness of typed arrays

Kenneth Russell kbr at google.com
Wed Mar 28 13:39:17 PDT 2012


On Wed, Mar 28, 2012 at 12:34 PM, Benoit Jacob <bjacob at mozilla.com> wrote:
> Before I joined this mailing list, Boris Zbarsky wrote:
>> C)  Try to guess based on where the array buffer came from and have
>> different behavior for different array buffers.  With enough luck (or
>> good enough heuristics), would make at least some WebGL work, while also
>> making non-WebGL things loaded over XHR work.
>
> FWIW, here is a way to do this that will always work and won't rely on "luck". The key idea is that by the time one draws stuff, all the information about how vertex attributes use buffer data must be known.
>
> 1. In webgl.bufferData implementation, don't call glBufferData, instead just cache the buffer data.
>
> 2. In webgl.vertexAttribPointer, record the attributes structure (their types, how they use buffer data). Do not convert/upload buffers yet.
>
> 3. In the first WebGL draw call (like webgl.drawArrays) since the last bufferData/vertexAttribPointer call, do the conversion of buffers and the glBufferData calls. Use some heuristics to drop the buffer data cache, as most WebGL apps will not have a use for it anymore.
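
As a rough sketch, the conversion in step 3 could amount to a per-element
byte swap like the following (the helper name is hypothetical, and the
element size would come from the recorded attribute type, e.g. 4 for
FLOAT; it assumes the byte length is a multiple of the element size):

    // Return a copy of `buffer` with the bytes of each element reversed.
    // The element size is only known once vertexAttribPointer has been
    // recorded, which is why the conversion is deferred to draw time.
    function byteSwapCopy(buffer, elementSize) {
      const src = new Uint8Array(buffer);
      const dst = new Uint8Array(buffer.byteLength);
      for (let i = 0; i < src.length; i += elementSize) {
        for (let j = 0; j < elementSize; j++) {
          dst[i + j] = src[i + elementSize - 1 - j];
        }
      }
      return dst.buffer;
    }

    // e.g. for a FLOAT attribute, the deferred glBufferData would upload
    // byteSwapCopy(cachedData, 4) instead of cachedData.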

It would never be possible to drop the CPU-side buffer data cache. A
subsequent draw call may set up the vertex attribute pointers
differently for the same buffer object, which would necessitate going
back through the buffer's data and generating new, appropriately
byte-swapped data for the GPU.
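
For example (a sketch only; gl, buf and vertexCount stand for whatever the
application has already set up), nothing stops content from reinterpreting
the same bytes under a different attribute type between draws:

    // First draw: the buffer is treated as 4-byte FLOAT data.
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.enableVertexAttribArray(0);
    gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, vertexCount);

    // Later draw: the same buffer is reused as 2-byte SHORT data, so a
    // big-endian implementation would have to re-swap the original bytes
    // in 2-byte groups instead of 4-byte groups. It can only do that if it
    // still has the unswapped CPU-side copy.
    gl.vertexAttribPointer(0, 2, gl.SHORT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, vertexCount);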

>> In practice, if forced to implement a UA on a big-endian system today, I
>> would likely pick option (C)....  I wouldn't classify that as a victory
>> for standardization, but I'm also not sure what we can do at this point
>> to fix the brokenness.
>
> I agree that seems to be the only way to support universal webgl content on big-endian UAs. It's not great due to the memory overhead, but at least it shouldn't incur a significant performance overhead, and it typically only incurs a temporary memory overhead as we should be able to drop the buffer data caches quickly in most cases. Also, buffers are typically 10x smaller than textures, so the memory overhead would typically be ~ 10% in corner cases where we couldn't drop the caches.

Our emails certainly crossed, but please refer to my other email.
WebGL applications that assemble vertex data for the GPU using typed
arrays will already work correctly on big-endian architectures. This
was a key consideration when these APIs were being designed. The
problems occur when binary data is loaded via XHR and uploaded to
WebGL directly. DataView is supposed to be used in such cases to load
the binary data, because the endianness of the file format is
necessarily known.
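
A minimal sketch of that pattern, assuming a made-up little-endian file
layout (a uint32 vertex count followed by tightly packed float positions)
already received from XHR as arrayBuffer:

    const view = new DataView(arrayBuffer);
    const vertexCount = view.getUint32(0, true);    // true = read little-endian
    const positions = new Float32Array(vertexCount * 3);
    for (let i = 0; i < positions.length; i++) {
      // Read each float with explicit endianness; the Float32Array stores
      // it in the host's native byte order.
      positions[i] = view.getFloat32(4 + i * 4, true);
    }
    // The upload now behaves identically on little- and big-endian machines.
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);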

The possibility of forcing little-endian semantics was considered when
typed arrays were originally being designed. I don't have absolute
performance numbers to quote you, but based on previous experience
with Java's NIO Buffer classes, I am positive that the performance
impact for WebGL applications on big-endian architectures would be
very large. It would prevent applications that manipulate vertices in
JavaScript from running acceptably on big-endian machines.

-Ken

> In conclusion: WebGL is not the worst here, there is a pretty reasonable avenue for big-endian UAs to implement it in a way that allows running the same unmodified content as little-endian UAs.
>
> Benoit


