[whatwg] Endianness of typed arrays

Jonas Sicking jonas at sicking.cc
Wed Mar 28 02:22:29 PDT 2012


On Wed, Mar 28, 2012 at 2:13 AM, Boris Zbarsky <bzbarsky at mit.edu> wrote:
> On 3/28/12 2:04 AM, Jonas Sicking wrote:
>>
>> Consider a big-endian platform where both the CPU and the GPU are
>> big-endian. If a webpage writes 16-bit data into an ArrayBuffer and
>> then sends that off to the GPU using WebGL, the data had better be
>> sent big-endian, otherwise the GPU will interpret it wrong.
>>
>> However, if the same page then writes some 16-bit data into an
>> ArrayBuffer and then looks at its individual bytes or sends it
>> across the network to a server, it's very likely that the data
>> needs to appear little-endian, or site logic might break.
>>
>> Basically I don't know how one would write a modern browser on a
>> big-endian system.
>
>
> What one could do is always store the array buffer bytes as
> little-endian, and then when sending to the GPU byte-swap as needed
> based on the API call being used (and hence the exact types the GPU
> actually expects).
>
> So basically, make all JS-visible state always be little-endian, and deal in
> the one place where you actually need native endianness.
>
> I believe that was substantially Robert's proposal earlier in this thread.
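
If I understand the proposal, the boundary swap would look roughly
like this (a sketch in plain JS; "elementSize" stands in for whatever
type information the engine can recover from the WebGL call):

    // Hypothetical big-endian engine: JS-visible bytes stay
    // little-endian, and each element is byte-reversed only where
    // the data crosses over to the big-endian GPU.
    function swapToBigEndian(buffer, elementSize) {
      var bytes = new Uint8Array(buffer.byteLength);
      bytes.set(new Uint8Array(buffer));  // copy; don't mutate the original
      for (var i = 0; i < bytes.length; i += elementSize) {
        // Reverse the bytes of one element in place.
        for (var lo = i, hi = i + elementSize - 1; lo < hi; lo++, hi--) {
          var tmp = bytes[lo];
          bytes[lo] = bytes[hi];
          bytes[hi] = tmp;
        }
      }
      return bytes;
    }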

Except that if the data was written in 32-bit units, you need a
different byte swap than if it was written in 16-bit units. And
remember that data can be written with different unit sizes in
different parts of the same ArrayBuffer.
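
To make that concrete (illustrative only, and assuming the
"JS-visible state is always little-endian" model above):

    var buf = new ArrayBuffer(4);
    var u16 = new Uint16Array(buf);
    var u32 = new Uint32Array(buf);

    u16[0] = 0xAABB;      // two 16-bit writes...
    u16[1] = 0xCCDD;
    // stored (little-endian) as: BB AA DD CC
    // correct big-endian form:   AA BB CC DD  (swap within each pair)

    u32[0] = 0xCCDDAABB;  // ...or one 32-bit write of the same bytes
    // stored (little-endian) as: BB AA DD CC  (identical to the above)
    // correct big-endian form:   CC DD AA BB  (reverse all four bytes)

The buffer itself records no trace of which unit size was used, so the
engine can't pick the right swap by looking at the bytes alone.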

The typed-array spec was specifically designed for use cases like
sending buffers containing data in patterns like "32-bit data, 16-bit
data, 16-bit data, 32-bit data, 16-bit data, 16-bit data...". Keeping
track of all that seems prohibitively expensive, but maybe I'm being
pessimistic.
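
A contrived example of the kind of layout I mean (vertexCount,
positions, texU and texV are made-up stand-ins):

    var buf = new ArrayBuffer(8 * vertexCount);
    var f32 = new Float32Array(buf);  // view for the 32-bit pieces
    var u16 = new Uint16Array(buf);   // view for the 16-bit pieces
    for (var v = 0; v < vertexCount; v++) {
      f32[v * 2]     = positions[v];  // 32-bit value at byte offset v*8
      u16[v * 4 + 2] = texU[v];       // 16-bit value at byte offset v*8+4
      u16[v * 4 + 3] = texV[v];       // 16-bit value at byte offset v*8+6
    }

To byte-swap that buffer correctly at the GPU boundary, the engine
would have to remember which view last wrote every byte range, which
amounts to per-byte bookkeeping on every typed-array store.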

/ Jonas


