[whatwg] BWTP for WebSocket transfer protocol

Jonas Sicking jonas at sicking.cc
Wed Aug 12 00:13:29 PDT 2009


On Tue, Aug 11, 2009 at 7:46 PM, Greg Wilkins <gregw at mortbay.com> wrote:
> Jonas Sicking wrote:
>> Can you suggest changes to the WS protocol that would make it a better
>> general-purpose protocol?
>
> There were several threads on the IETF HYBI mailing list with some such
> proposals:
>
>  http://www.ietf.org/mail-archive/web/hybi/current/maillist.html
>
> An example of such a message is at the bottom of this email.
> However, the response to such proposals was pretty much that
> they were too complex and not needed for the ws API.

I did follow that list somewhat, but to be honest the signal-to-noise
ratio was too low. A lot of the suggestions and complaints just didn't
feel practical.

> It was the result of those interactions that suggested to me
> that a bidirectional web protocol would be best developed
> at arms length to the websocket API, and thus the BWTP
> effort was born.

I'm glad to see that.

> So far the feedback I have received on BWTP is suggesting
> that it has perhaps gone a little too far the other way
> and that there are probably some significant simplifications
> that can be achieved without greatly restricting the feature
> set.

That is my impression too.

>> You've suggested multiplexing, segmentation,
>> per-frame mime-type and per-frame meta-data so far. Is there anything
>> else that is needed? It would also be good to know what use cases you
>> have in mind for all of these features in order to evaluate them.
>
> Predicting the future is always hard, but using the present
> as an indicator is a good start.
>
> Currently the majority of the web traffic is carried over HTTP
> which is capable of multiplexing, segmentation, per-frame mime-type
> and per-frame meta-data.
>
> I don't see why adding bidirectional capability should result in any
> significant reduction in these capabilities of web transports.
>
> For example, HTTP can well transport a vast array of content types
> with meta data support to negotiate accepted languages, types and
> encodings.
>
> The ws API can only handle UTF-8 text datagrams, so as a result
> the ws protocol has special case handling for UTF-8 text datagrams.
>
> So I think that our starting point should be to develop a
> bidirectional protocol that can well support the current web
> transport capabilities.   I would say that anybody
> who wishes to advocate a less capable transport should
> be asked to make the case for why capabilities should be
> lost with bidirectional protocols.

I agree we should use the experiences from HTTP. However it seems like
we have different experiences.

For example, MIME types in HTTP have a very troubled record. Look at
Adam Barth's draft [1] for what browsers are forced to do to stay
compatible with the web. And the problem persists: the newly deployed
downloadable-font support in Firefox ignores MIME types entirely,
since no MIME type exists for fonts. With video the situation is even
more complicated, since both the container format and the encodings of
the video and audio streams inside it need to be described, and all
three types can vary independently.
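
To make the video case concrete, here is a rough sketch (mine, not
anything from the WS drafts) of parsing a Content-Type value where the
container and the codecs of the streams inside it are separate pieces
of one header, in the RFC 4281 style:

```python
# Sketch of why video MIME types are tricky: the container format and
# the codecs of the streams inside it are carried by separate pieces
# of a single Content-Type value.
def parse_content_type(value):
    """Split a Content-Type header into its type and its parameters."""
    parts = [p.strip() for p in value.split(";")]
    mime_type = parts[0]
    params = {}
    for p in parts[1:]:
        key, _, val = p.partition("=")
        params[key.strip()] = val.strip().strip('"')
    return mime_type, params

# Container is WebM; the codecs parameter names the video and audio
# encodings independently of the container type.
mime_type, params = parse_content_type('video/webm; codecs="vp8, vorbis"')
```

Three independent identifiers, one header value, and no registered
type at all for some formats; that is the troubled record I mean.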

Similarly, it is even more doubtful that content negotiation has
provided any value. The only site where I can remember seeing content
negotiation actually used is w3.org, an organization that can safely
be considered expert on web standards. Yet even there things
immediately failed: when Firefox started claiming support for
application/xml, several URLs stopped working, because the browser was
sent the XML source used to generate the specification rather than
something that could usefully be rendered.

Similarly, how many URLs have you seen that look like
http://something.com/.../feed?format=atom or format=rss? That is
exactly the case content negotiation was supposed to handle.
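
For reference, the mechanism those ?format= URLs are routing around is
simple enough to sketch; this is my own illustration, ignoring
q-values and wildcard subtypes for brevity, not code from any browser
or server:

```python
# Hypothetical sketch of server-side content negotiation: choose the
# response type from the client's Accept header. A real implementation
# would also honour q-values per RFC 2616 section 14.1.
def negotiate(accept_header, offered):
    """Return the first offered type the client accepts, else None."""
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    for mime_type in offered:
        if mime_type in accepted or "*/*" in accepted:
            return mime_type
    return None

offered = ["application/atom+xml", "application/rss+xml"]
print(negotiate("application/atom+xml,text/html;q=0.9", offered))
```

In practice sites find the explicit query parameter easier to cache,
debug, and link to than this invisible header-based dance.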

I'm sure there are many sites that use content negotiation, and do so
successfully. But given the trillion or so pages on the web, can
content negotiation really be said to be used often enough to count as
a successful feature? I.e. was the time spent specifying and
implementing it really worth it? And are the 63 bytes of data that
Firefox adds to each request well spent?

So while I agree that we should look to existing protocols, we do need
to be honest about what has been successful and what hasn't, and not
just assume that if existing protocols do something, it's a good idea.

[1] http://www.ietf.org/id/draft-abarth-mime-sniff-01.txt

> Example proposal to improve websocket protocol that
> was rejected:
>
> Greg Wilkins wrote:
>>> It would be great if the websocket proposal could include
>>> standard definitions for mime encoded datagrams.
>>>
>>> Current frame types are:
>>>
>>>   0x00  - sentinel framed UTF-8 message
>>>   0x80  - length framed binary data.
>>>
>>> I'd like to see two additional frame types supported
>>> by default:
>>>
>>>   0x01  - sentinel framed UTF-8 encoded MIME message
>>>   0x81  - length framed MIME message.
>>>
>>> Both these data types would contain data that commences
>>> with a standard mime header (RFC 2045).   The header is optional
>>> and terminated by CR LF CR LF.  Thus these types have a minimal
>>> overhead of 4 bytes.
>>>
>>> For both these types, any Content-Length header will be
>>> ignored and the length indicated by the websocket framing
>>> minus the header length will be used.
>>>
>>> For 0x01 types the content type is assumed to be "text/plain; charset=utf-8"
>>> If a content type header is specified, it must be "text/????; charset=utf-8"
>>>
>>> For 0x81 the content type is assumed to be application/octet-stream unless
>>> otherwise indicated.
>>>
>>> The websocket API would need to be slightly extended to support some
>>> common types of message.
>>>
>>> I would suggest that onmessage always be called for all text
>>> mime types, but with some additional parameters: eg.
>>>
>>>   onmessage(text,mimetype,headers)
>>>
>>> The browser would be responsible for converting the transported
>>> charset to the charset of javascript. If the conversion could not
>>> be done, then the message would be discarded.
>>>
>>> Additional events could be supported if you want the browser/server
>>> to do the parsing for you.   For text/xml & text/html:
>>>
>>>   ondocument(dom,headers)
>>>
>>> and for text/json
>>>
>>>   onobject(object,headers)
>>>
>>>
>>> To send such messages, the API would also need to support
>>>
>>> void postMessage(data,headers);
>>>
>>>
>>>
>>> I think this is a minimal change to websocket and would go a long
>>> way to address many of the concerns raised here.    With the ability
>>> to send standardized meta data, then the job of coming up with
>>> standardized multiplexing is much much simpler.

If we want MIME support this seems like a good proposal. Except I
don't understand the point of making the MIME header optional: if
someone wants to send something without a mimetype, the other frame
types already cover that.
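
To check my reading of the proposal, here is a sketch of a
sentinel-framed 0x01 MIME message: the frame type byte, an RFC 2045
style header terminated by CR LF CR LF, the payload, then the 0xFF
sentinel the draft uses for 0x00 text frames. The sentinel choice is
my assumption; the proposal only says "sentinel framed":

```python
# Sketch of encoding/decoding a 0x01 "sentinel framed UTF-8 encoded
# MIME message" as I read the quoted proposal; not a specification.
def encode_mime_frame(payload, content_type="text/plain; charset=utf-8"):
    header = "Content-Type: %s\r\n\r\n" % content_type
    return b"\x01" + header.encode("utf-8") + payload.encode("utf-8") + b"\xff"

def decode_mime_frame(frame):
    """Split a 0x01 frame back into (headers dict, body text)."""
    assert frame[0:1] == b"\x01" and frame[-1:] == b"\xff"
    head, _, body = frame[1:-1].partition(b"\r\n\r\n")
    headers = {}
    for line in head.decode("utf-8").splitlines():
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers, body.decode("utf-8")
```

Note that per the proposal any Content-Length header would simply be
ignored; the framing itself determines the length.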

I'm curious to hear what you consider the advantages of this over
simply transmitting, for example, JSON over a "sentinel framed UTF-8
message" frame. I.e. can you describe an application that would send
JSON using the above proposal?
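
To be explicit about the alternative I mean: the message "type" can
live inside the JSON itself, over a plain 0x00 frame, with no MIME
header at all. A sketch (again mine, assuming the draft's 0x00/0xFF
sentinel framing):

```python
import json

# Sketch of application-level typing: JSON over a plain 0x00
# sentinel-framed UTF-8 message, with the message kind carried as an
# ordinary JSON field rather than in MIME headers.
def encode_text_frame(obj):
    return b"\x00" + json.dumps(obj).encode("utf-8") + b"\xff"

def decode_text_frame(frame):
    assert frame[0:1] == b"\x00" and frame[-1:] == b"\xff"
    return json.loads(frame[1:-1].decode("utf-8"))

msg = decode_text_frame(encode_text_frame({"kind": "chat", "text": "hi"}))
```

What does the MIME layer buy such an application beyond this?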

/ Jonas


