[whatwg] Flow Control on Websockets

William Chan (陈智昌) willchan at chromium.org
Thu Oct 17 18:36:48 PDT 2013

Hi, Chromium network dev here. I'm not an expert on the WebSocket API.
Pardon any stupid comments :)

On Thu, Oct 17, 2013 at 9:56 AM, Nicholas Wilson <
nicholas at nicholaswilson.me.uk> wrote:

> Hello Michael,
> If you're at all interested in the freshness of the data, you don't
> want to use TCP as your sole flow-control mechanism. It's fine for bulk
> file transfers, but think how many megabytes of buffering there can be
> - the sum of all the send buffers of all the intermediaries along the
> chain. On a low-loss network, the TCP window size will become very
> large. You quickly get to a point where the server's filled up all the
> buffers along the way - fine for file transfer, but potentially
> seconds' worth of latency.

While what you say is true about bufferbloat and low-latency interactivity,
I generally consider that orthogonal to flow control. As per
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Flow_control, I
think of flow control as preventing the sender from sending so much data
that the receiver cannot process it quickly enough. The receiver side
solution is to buffer, but since buffers are a finite resource, you can't
buffer forever. Minimizing queueing delays as a result of bufferbloat is
not something that flow control is intended to solve.

> So, you always need an application-level windowing setup for
> interactive flows. Just sending until the socket blocks will cause a
> backlog to build up.

I don't know if I agree with this. Web browsing over HTTP/1.X doesn't use
application-layer flow control; browsers rely on TCP for it. I consider
web browsing interactive. If your point is that bufferbloat is widespread
and applications should be careful about self-inducing queueing delays,
then I agree.

> Your second question is whether it's possible to stop the browser
> reading from the socket. Yes, just don't return from your onmessage
> handler until you've actually finished handling the message. If you
> fire up a new worker then tell the browser you're done, you're seeing
> the inevitable result of that. You have to wait on the worker - or, if
> you want to process say four messages in parallel, wait on the worker
> pool until it's dropped below four active before returning.

Pardon the ignorance, but doesn't the onmessage handler run on the main
thread? If you don't return from it, then doesn't that block the main
thread? If so, then I think that's a bad solution. Blocking the main thread
is generally terrible IMO and should be avoided.
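A way to get the same backpressure effect without blocking the main thread
is to keep the onmessage handler cheap and feed an asynchronous pipeline
with a concurrency cap, buffering the overflow. A minimal sketch - the
class and names are my own illustration, not anything from the WebSocket
API:

```javascript
// Sketch (assumed names): process at most `limit` messages concurrently,
// buffering the rest, instead of blocking the main thread in onmessage.
class MessagePump {
  constructor(limit, handler) {
    this.limit = limit;     // max messages being processed at once
    this.handler = handler; // async function handling one message
    this.inFlight = 0;
    this.backlog = [];      // note: unbounded - still grows if r1 > r2
  }

  // Call this from ws.onmessage(event) with event.data and return at once.
  enqueue(data) {
    if (this.inFlight < this.limit) this._run(data);
    else this.backlog.push(data);
  }

  _run(data) {
    this.inFlight++;
    Promise.resolve(this.handler(data)).finally(() => {
      this.inFlight--;
      if (this.backlog.length > 0) this._run(this.backlog.shift());
    });
  }
}
```

Note the backlog array is exactly the "buffer forever" problem from the
original question: without a way to stop the browser reading from the
socket, the cap only limits concurrency, not total memory.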

> Implementing some flow control messages is not a bad thing at all. TCP
> is there to prevent traffic disaster, not to guarantee success.

Since TCP already provides flow control, I would think that for most
developers, it'd be fairly convenient to leverage that. Although perhaps if
you're at the point that you actually care about flow control and OOM, you
might be close to the point that you want an application-layer flow control
mechanism anyway.
> Nick
> -----
> Nicholas Wilson: nicholas at nicholaswilson.me.uk
> On 17 October 2013 17:29, Michael Meier <mm at sigsegv.ch> wrote:
> > Hey
> >
> > This message is going to be a slight rewording and generalization of a
> > question I asked yesterday on Stack Overflow. The SO question is here:
> >
> http://stackoverflow.com/questions/19414277/can-i-have-flow-control-on-my-websockets
> >
> > Suppose I have a Websocket server and a WS client connected to it. The
> > client is a JS script in a browser offering the standard WS API. The
> > server produces data at a rate r1 and sends it to the client, which is
> > able to meaningfully process data at a rate of r2, where r1 > r2.
> >
> > The JS script registers an onmessage handler and is called every time
> > the browser receives a message from the WS. Even if the JS script is
> > still busy processing the received message, say over a chain of
> > asynchronous calls, the browser might receive the next message and
> > call onmessage again. For the script, there are two options to
> > proceed. The first option is to drop data. This might not be possible
> > in all applications and is also a shame, since the data has already
> > been transported over the network. The second option is to buffer the
> > data. This is not a real option, though, since it will buffer an
> > ever-increasing amount of data because r1 > r2.
> >
> > If the JS script had a mechanism to tell the browser not to read
> > further data from the TCP socket underlying the WS, TCP backpressure
> > would naturally build up to the server.
> >
> > On the sending side of the browser, flow control seems to be possible by
> > using the bufferedAmount attribute to decide when to pause and resume
> > sending of data.
> >
> >
> > Why is there such an asymmetry between sending and receiving? Is it
> > possible to have flow control on the receiving side without resorting
> > to application-level handshakes?*
> >
> >
> > Cheers,
> > Michael
> >
> > * Which would reimplement large parts of TCP. Which is a shame to do when
> > already running on a TCP connection and also a Bad Idea(TM).
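For reference, the send-side technique Michael describes with
bufferedAmount can be sketched as follows. The threshold and function
names are my own assumptions; the only WebSocket API surface used is
send() and the bufferedAmount attribute:

```javascript
// Sketch of send-side flow control using bufferedAmount: pause sending
// when the socket's unsent byte count exceeds a high-water mark.
// `socket` only needs send() and bufferedAmount, so this also works
// with a real browser WebSocket.
const HIGH_WATER = 64 * 1024; // bytes; an assumed threshold, tune per app

function drainQueue(socket, queue) {
  // Send queued chunks until the socket's buffer is "full".
  while (queue.length > 0 && socket.bufferedAmount < HIGH_WATER) {
    socket.send(queue.shift());
  }
  return queue.length; // chunks still waiting; caller retries later,
                       // e.g. from a short setTimeout/setInterval
}
```

Since the WebSocket API has no "buffer drained" event, the caller has to
poll bufferedAmount (typically on a timer) to know when to resume -
which is itself a small illustration of the asymmetry Michael asks about.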
