[whatwg] Flow Control on Websockets

Nicholas Wilson nicholas at nicholaswilson.me.uk
Thu Oct 17 09:56:15 PDT 2013

Hello Michael,

If you're at all interested in the freshness of the data, you don't
want to use TCP as your sole flow-control mechanism. It's fine for
bulk file transfers, but think how many megabytes of buffering there
can be - the sum of all the send buffers of all the intermediaries
along the chain. On a low-loss network, the TCP window will grow very
large. You quickly reach a point where the server has filled all the
buffers along the way - fine for a file transfer, but potentially
seconds' worth of latency for interactive data.

So, you always need an application-level windowing setup for
interactive flows. Just sending until the socket blocks will cause a
backlog to build up.
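As a rough illustration of what I mean by application-level windowing
(all names here are made up, and the protocol is just one way to do
it): the receiver grants the sender a credit of N messages; the sender
queues data when it runs out of credit and resumes when more credit
arrives.

```javascript
// Sketch of credit-based (windowed) flow control. "send" is whatever
// actually puts a message on the wire (e.g. ws.send); the credit-grant
// message itself would travel back over the same WebSocket.
function makeWindowedSender(send, initialCredits) {
  let credits = initialCredits;
  const queue = [];

  function pump() {
    // Send as long as we hold credit and have data queued.
    while (credits > 0 && queue.length > 0) {
      credits--;
      send(queue.shift());
    }
  }

  return {
    write(msg) { queue.push(msg); pump(); },  // enqueue application data
    grant(n)  { credits += n; pump(); },      // receiver granted n more slots
    pending() { return queue.length; }        // messages still awaiting credit
  };
}
```

The receiver sends a grant each time it finishes processing a batch,
so the sender can never get more than the window size ahead of it.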

Your second question is whether it's possible to stop the browser
reading from the socket. Yes: just don't return from your onmessage
handler until you've actually finished handling the message. If you
fire off a new worker and then return immediately, telling the browser
you're done, the backlog you're seeing is the inevitable result. You
have to wait on the worker - or, if you want to process, say, four
messages in parallel, wait until the worker pool has dropped below
four active jobs before returning.
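Concretely (a minimal sketch; the processing function is a stand-in
for your own code): keep the work inside the handler, so the handler
only returns once the message has genuinely been dealt with.

```javascript
// Wrap a synchronous processing function as an onmessage handler that
// does all the work before returning, so the browser cannot race ahead
// of the script and backpressure can build up.
function makeBlockingHandler(process) {
  return function onmessage(event) {
    process(event.data);  // synchronous; return only once handling is done
  };
}

// usage (in a browser): ws.onmessage = makeBlockingHandler(handleRecord);
```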

Implementing some flow-control messages of your own is not a bad thing
at all. TCP is there to prevent congestion disaster, not to guarantee
your application's success.


Nicholas Wilson: nicholas at nicholaswilson.me.uk

On 17 October 2013 17:29, Michael Meier <mm at sigsegv.ch> wrote:
> Hey
> This message is going to be a slight rewording and generalization of a
> question I asked yesterday on Stack Overflow. The SO question is here:
> http://stackoverflow.com/questions/19414277/can-i-have-flow-control-on-my-websockets
> Suppose I have a Websocket server and a WS client connected to it. The
> client is a JS script in a browser offering the standard WS API. The server
> produces data at a rate r1 and sends it to the client, which is able to
> meaningfully process data at a rate of r2, where r1 > r2.
> The JS script registers an onmessage handler and is called every time the
> browser receives a message from the WS. Even if the JS script is still busy
> processing the received message, say over a chain of asynchronous calls, the
> browser might receive the next message and call onmessage again. For the
> script, there are two options to proceed. The first option is to drop data.
> This might not be possible in all applications and is also a shame, since
> the data has already been transported over the network. The second option is
> to buffer the data. This is not a real option, though, since it will buffer
> an ever increasing amount of data because r1 > r2.
> If the JS script had a mechanism to tell the browser not to read further
> data from the TCP socket underlying the WS, TCP backpressure would naturally
> build up to the server.
> On the sending side of the browser, flow control seems to be possible by
> using the bufferedAmount attribute to decide when to pause and resume
> sending of data.
> Why is there such an asymmetry between sending and receiving? Is it possible
> to have flow control on the receiving side without resorting to application
> level handshakes?*
> Cheers,
> Michael
> * Which would reimplement large parts of TCP. Which is a shame to do when
> already running on a TCP connection and also a Bad Idea(TM).
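For the send-side pattern you mention, the bufferedAmount approach
looks roughly like this (a sketch only; the high-water mark and poll
interval are arbitrary values I've picked, and next() is a
hypothetical source of chunks):

```javascript
// Pause sending when the browser's internal buffer (bufferedAmount)
// grows past a threshold; poll and resume once it has drained.
const HIGH_WATER = 1 << 20;  // 1 MiB, an arbitrary illustrative threshold

function sendWithBackpressure(ws, next) {
  // next() returns the next chunk to send, or null when finished.
  (function pump() {
    while (ws.bufferedAmount < HIGH_WATER) {
      const chunk = next();
      if (chunk === null) return;  // all data sent
      ws.send(chunk);
    }
    setTimeout(pump, 50);  // buffer full: check again shortly
  })();
}
```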
