[whatwg] WebSocket bufferedAmount includes overhead or not

Perry Smith pedzsan at gmail.com
Thu Mar 25 10:02:08 PDT 2010

On Mar 25, 2010, at 10:55 AM, Anne van Kesteren wrote:

> On Thu, 25 Mar 2010 16:35:19 +0100, Olli Pettay <Olli.Pettay at helsinki.fi 
> > wrote:
>> On 3/25/10 4:25 PM, Niklas Beischer wrote:
>>> Easy. The bufferedAmount is: "The amount of bytes waiting to be
>>> transferred, including protocol overhead".
>> That doesn't define exactly what the bufferedAmount means and what
>> kinds of values are expected.
>> What I'd expect the API to tell me is, for example, that if I call
>> ws.send(foo), and nothing is yet sent over the network, what is the
>> exact value of bufferedAmount.
>> Again, I'd really wish to keep the API to work reasonable similar
>> way as XHR+progress events where protocol overhead isn't reported.
> Why? Progress events are completely different from this. This is  
> about not saturating the network with too much data; it makes sense  
> if the actual amount of data that is going to hit the network is  
> known. (Yes, I changed my mind.)


I'm going to wade in here, though I may be way off base.  I looked rather  
quickly at the doc on w3.org.  Given that, here are my thoughts:

I do kernel-level networking support as my day job.  Trying to get  
JavaScript to not saturate the network is not going to work.  There  
are vast amounts of technology governing how data flows in a network,  
and a primitive JavaScript app is woefully under-equipped.

Even if "Quality of Service" is the objective, JavaScript would be a  
poor place to put your hopes.

I like the idea of bufferedAmount, but it could almost be in "arbitrary  
units" -- just something from which the app can determine "Hey!  I'm not  
making any progress", or perhaps a way for the JavaScript to keep the  
user updated on progress.  But making this into a way to avoid  
saturating the network is not going to work.
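As a rough sketch of what "arbitrary units" would buy you: a hypothetical helper (makeProgressWatcher is my name for it, not part of any spec) that only asks whether the number went down since the last look --

```javascript
// Sketch: treating bufferedAmount as arbitrary progress units.
// makeProgressWatcher is a hypothetical helper; it takes a getter so it
// can be driven by a real socket (() => ws.bufferedAmount) or by a stub.
function makeProgressWatcher(getBufferedAmount) {
  let last = getBufferedAmount();          // initial sample
  return function check() {
    const now = getBufferedAmount();
    const madeProgress = now < last;       // queue drained since last check
    last = now;
    return { buffered: now, madeProgress };
  };
}
```

With a real socket you might call check() from a setInterval and warn the user when madeProgress stays false.  Note the units never matter, only the direction of change.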

Indeed, if the JavaScript wants to attempt some type of quality of  
service, the only way for it to do that would be to send data, watch  
until bufferedAmount goes to zero, then pause for some length of time  
to "un-saturate" the network.  As soon as something is queued up  
(sent), everything below is trying its best to send it out as fast as  
possible.  Watching bufferedAmount isn't going to change the lower  
levels of the network stack.  So, again, the units could be arbitrary.  
If the script knows it sent N bytes, and it took X time, it knows how  
much bandwidth it is getting.  If it wants to play nice, it can  
calculate how much time to pause based only on those two numbers.
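The arithmetic in that last sentence can be sketched like so (throttlePlan and targetFraction are hypothetical names, not anything from the spec):

```javascript
// Sketch of the back-of-the-envelope throttle described above.
// bytesSent: what the script handed to send(); elapsedMs: time until
// bufferedAmount returned to zero; targetFraction: share of the observed
// bandwidth the script is willing to use (e.g. 0.5 to "play nice").
function throttlePlan(bytesSent, elapsedMs, targetFraction) {
  const bytesPerMs = bytesSent / elapsedMs;  // observed bandwidth
  // Idling for elapsedMs * (1/targetFraction - 1) between bursts makes
  // the long-run average rate equal targetFraction * bytesPerMs.
  const pauseMs = elapsedMs * (1 / targetFraction - 1);
  return { bytesPerMs, pauseMs };
}
```

For example, if 1000 bytes drained in 100 ms and the script only wants half the pipe, it should pause another 100 ms before the next send.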

As far as adding in any protocol overhead: there is no way an  
application is going to know what that is unless you split the  
protocol stack at some point.  I don't see why the application level  
would want to know anything about the lower-level protocol.  At the  
same time, if an implementation wants to add in some of the overhead  
that it sees, that still gives the application all the tools it needs  
to implement whatever it can.

I would focus on wording like "monotonically decreasing after a send"  
and "eventually ends up at zero".  Those two properties are what I'd  
like to be sure are true.
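Spelled out as a check over bufferedAmount samples taken after the last send (drainsMonotonically is a hypothetical helper, just the invariant written down, not spec text):

```javascript
// Sketch: the two properties argued for above, as a predicate over a
// series of bufferedAmount samples recorded after the final send().
function drainsMonotonically(samples) {
  for (let k = 1; k < samples.length; k++) {
    if (samples[k] > samples[k - 1]) return false; // must never grow
  }
  return samples[samples.length - 1] === 0;        // and must reach zero
}
```

Any implementation satisfying this, whatever its units, gives the script enough to work with.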
