[whatwg] Flow Control on Websockets
Michael Meier
mm at sigsegv.ch
Thu Oct 17 09:29:42 PDT 2013
Hey
This message is going to be a slight rewording and generalization of a
question I asked yesterday on Stack Overflow. The SO question is here:
http://stackoverflow.com/questions/19414277/can-i-have-flow-control-on-my-websockets
Suppose I have a WebSocket server and a WS client connected to it. The
client is a JS script running in a browser that offers the standard WS
API. The server produces data at a rate r1 and sends it to the client,
which is able to meaningfully process data at a rate r2, where r1 > r2.
The JS script registers an onmessage handler, which is called every time
the browser receives a message from the WS. Even if the JS script is
still busy processing the previous message, say over a chain of
asynchronous calls, the browser might receive the next message and call
onmessage again. The script then has two options to proceed. The
first option is to drop data. This might not be possible in all
applications and is also a shame, since the data has already been
transported over the network. The second option is to buffer the data.
This is not a real option, though, since it will buffer an ever-increasing
amount of data because r1 > r2.
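To make this concrete, here is a minimal sketch of both options;
processChunk is made up for the example and stands in for whatever
asynchronous processing runs at rate r2:

    var ws = new WebSocket('ws://example.org/feed');   // URL is just an example
    var queue = [];
    var busy = false;

    ws.onmessage = function (event) {
      // Option 1: drop. Cheap, but the data has already crossed the network.
      // if (busy) return;

      // Option 2: buffer. With r1 > r2 this queue grows without bound.
      queue.push(event.data);
      drain();
    };

    function drain() {
      if (busy || queue.length === 0) return;
      busy = true;
      processChunk(queue.shift(), function () {   // made-up async worker, rate r2
        busy = false;
        drain();
      });
    }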
If the JS script had a mechanism to tell the browser to stop reading
from the TCP socket underlying the WS, the receive buffer would fill up,
the TCP window would close, and backpressure would naturally build up
towards the server.
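What I am missing is something along these lines. pauseReceiving and
resumeReceiving do not exist anywhere; I made them up purely to
illustrate the mechanism (reusing ws and processChunk from the sketch
above):

    ws.onmessage = function (event) {
      ws.pauseReceiving();                  // hypothetical: stop reading the underlying TCP socket
      processChunk(event.data, function () {
        ws.resumeReceiving();               // hypothetical: resume reading; the server feels backpressure
      });
    };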
On the sending side of the browser, flow control seems to be possible by
using the bufferedAmount attribute to decide when to pause and resume
sending of data.
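A rough sketch of that sender-side throttling, again with a made-up
producer (produceNextChunk) and arbitrary numbers for the water mark and
poll interval:

    var HIGH_WATER_MARK = 1024 * 1024;      // 1 MiB, arbitrary threshold

    setInterval(function () {
      // Hand more data to the browser only while its send buffer is small.
      while (ws.bufferedAmount < HIGH_WATER_MARK) {
        var chunk = produceNextChunk();     // made-up producer, rate r1
        if (chunk === null) return;
        ws.send(chunk);
      }
      // bufferedAmount is above the mark: pause producing until the next tick.
    }, 100);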
Why is there such an asymmetry between sending and receiving? Is it
possible to have flow control on the receiving side without resorting to
application-level handshakes?*
Cheers,
Michael
* Which would reimplement large parts of TCP, which is a shame to do
when we are already running on top of a TCP connection, and also a Bad
Idea(TM).
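For illustration, an application-level, credit-based handshake on top of
the WS might look roughly like this on the client (message format
invented for the example, server-side counterpart not shown); it is
essentially a hand-rolled receive window:

    var credits = 0;

    function grantCredits(n) {
      credits += n;
      ws.send(JSON.stringify({ type: 'credit', count: n }));   // invented message format
    }

    ws.onmessage = function (event) {
      credits--;
      processChunk(event.data, function () {
        if (credits < 5) grantCredits(10);   // arbitrary low/high water marks
      });
    };

    grantCredits(10);   // initial window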