[whatwg] onclose events for MessagePort

Ian Hickson ian at hixie.ch
Fri Dec 13 15:29:45 PST 2013


On Wed, 11 Dec 2013, Jonas Sicking wrote:
> On Wed, Dec 11, 2013 at 3:54 PM, Ian Hickson <ian at hixie.ch> wrote:
> > I really don't understand why we'd want that level of complexity for 
> > something that we can solve with just a single event handler, though. 
> > The problem, as described, is just "we need to know when a remote port 
> > is killed due to an OOM error". The event handler does this. Why do we 
> > need to complicate it with two-way metadata messages to pin workers 
> > and so on? I really don't understand what that gets us.
> 
> Starting with this since there seems to be misunderstanding about use 
> cases.
> 
> The use case here is a page (or a worker) sending a message through a 
> message port and wanting to know if the other side "disappears" before 
> it is able to respond.
> 
> There are a few scenarios where the responder could disappear.
> 1. The process where the responder lives could crash.
> 2. The responder could be running in a dedicated Worker which was
> terminated through Worker.terminate().
> 3. The responder could be a web page that the user navigates away from.
> 4. The responder could be a dedicated Worker which is owned by a web
> page that the user navigates away from.

Well, if we want to support these, then I agree that the only workable 
solution is one where you can essentially buy a lock that prevents the 
other side from getting bfcached until it has replied, such that if it is 
suspended (for workers) or navigated away from (for browsing contexts) 
before it receives the message saying the lock is released, it instead 
just gets discarded permanently.

It does seem pretty lame that we would allow any random page you 
communicate with to be able to prevent you from bfcaching, but I don't 
know what we do about that.


> So I hope that describes the use case, and that it also describes the 
> dedicated workers issue.

I did actually consider these use cases; I just didn't address them 
because exposing the bfcache logic or allowing pages to prevent bfcaching 
seemed like a bad idea:

   http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Dec/0062.html


> No sync IPC needed. When a port is pinned, you send an async message to 
> the process which contains the page for the "other side". When that 
> process receives the message you check if the page is currently being 
> displayed.
> 
> If the page has been completely torn down then you send a message back 
> saying that the page is dead and that the promise created during pinning 
> should be rejected.
> 
> If the page is sitting in the bfcache, you remove it from the bfcache 
> and send a message back saying that the page is dead and that the 
> promise created during pinning should be rejected.
> 
> If the page is displayed, then you add a flag indicating that if the 
> page is navigated away from, it should not go into the bfcache and that 
> we should send a signal to reject the promise.
> 
> Obviously if the process had crashed before we were able to process the 
> event, you send a message back to reject the promise.
> 
> The same thing is done when unpinning. You send a message to the other 
> side saying that it's getting unpinned.

This means that it's possible to get a lock, have the other side navigate 
away and then come back, and only then have the other side receive the 
notification for the lock. That's the race you'd need blocking IPC to 
prevent. But I guess we could live with that just being possible.
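For concreteness, here's the receiving-process logic I understand you to 
be describing, as a rough sketch; every name in it is made up for 
illustration, not proposed API:

   // Sketch of handling the async "pin" message in the target's process.
   // All helper names here are invented for illustration.
   function onPinMessage(page, pinId) {
     if (!page || page.processCrashed) {
       // Page already torn down (or its process crashed): reject.
       sendToPinner({ type: 'reject-pin', pinId: pinId });
     } else if (page.inBFCache) {
       // Evict it from the bfcache, then reject.
       page.evictFromBFCache();
       sendToPinner({ type: 'reject-pin', pinId: pinId });
     } else {
       // Page is currently displayed: flag it so that navigating away
       // skips the bfcache and rejects the pin. The window between the
       // pin being requested and this flag being set is the race
       // described above.
       page.rejectPinOnNavigation(pinId);
     }
   }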


> > I strive for a higher bar than just "no objections". Certainly though 
> > I welcome further input from everyone else here.
> 
> Your proposal doesn't even meet this bar.

True. A better way of putting it is that I strive for an orthogonal bar. 
But that's another story.


> But more importantly, your argument that it's too burdensome to 
> implement doesn't carry much weight when no actual implementor has 
> expressed that concern.

This I disagree with. I have many times been in situations where we've 
discussed something, come up with a solution, had nobody object, specced 
it, implemented it, shipped it, and then years later had other 
implementors complain about the complexity.

If something can be solved with a simple event, then having a complex API 
is not a good idea.

In this case, though, if we want to solve all the problems you listed, 
then indeed a simple event isn't going to cut it.


Anyway.

If we do want to address use cases 3 and 4 above, then I agree that the 
event in the spec today isn't sufficient.

I'm not sure a promise makes a lot of sense as an alternative, though. To 
make that work, you'd need to have the UA reject the promise but the 
author resolve it, which seems kind of unusual for promise-based APIs. You 
also end up, as author, being simultaneously the consumer of the promise 
and the resolver, which is a bit weird too.
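Concretely, with a hypothetical port.pin() (purely illustrative syntax, 
nothing like it is specced), I'd expect the shape to be something like:

   // Hypothetical pin()-based API, purely to illustrate the shape:
   var pin = port.pin();            // pin.promise rejects if the other side dies
   port.postMessage({ type: 'get-data' });
   port.onmessage = function (e) {
     handleResponse(e.data);        // handleResponse/handleFailure are made up
     pin.release();                 // the author resolves/releases the pin...
   };
   pin.promise.catch(function () {
     handleFailure();               // ...but also consumes the UA's rejection
   });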

Also, I'm not sure it really solves all the use cases. As an author, one 
of the ways in which I use channels (in any environment, so not just those 
that use MessagePorts, but also e.g. Linux pipes, WebSockets, etc.) is as a 
notification mechanism. As David Barrett-Kahn implied earlier, it's no 
good if you're listening to notifications, and then they stop, but you 
never get told that they've stopped. The pinning mechanism would basically 
require authors who are consuming such streams from other tabs to pin when 
they opened the channel, and leave it pinned forever. To detect when the 
channel is dead, they'd have to listen for rejection on the promise, an 
object separate from the port. This seems unintuitive.
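That is, something like the following (same hypothetical pin() as above), 
with the pin simply never released:

   // Notification stream under the promise model (hypothetical API again):
   var pin = port.pin();            // pinned for the lifetime of the channel
   port.onmessage = function (e) {
     showNotification(e.data);      // showNotification/markStreamDead are made up
   };
   pin.promise.catch(function () {
     // The only place you learn the stream is dead is this separate
     // promise object, not the port itself.
     markStreamDead();
   });
   // pin.release() is never called, so the other side can never be bfcached.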

But in such a scenario, would you even _want_ to block bfcaching? You'd 
want to know when it was _never_ coming back, as in worker.terminate(). 
But if it went into the bfcache, don't you really just want to receive a 
message saying "I'm actually not around right now"?

Maybe, and I'm just brainstorming here, we should deal with the bfcache 
differently than termination/oom/crash. How about we provide an API that 
defines an "autoresponder" message that gets sent when a port receives a 
message but isn't around to handle it? Or maybe an API that defines a 
message to send when the worker is suspended or the tab is navigated away 
from, and another message to send when the worker/port is resumed. Or maybe 
instead of an API that defines a prerecorded message, we just fire a 
different event. So you'd have:

   port.onmessage - received a message
   port.onerror - other side crashed or was terminate()d
   port.onsuspend - other side went to sleep
   port.onresume - other side came back

Then you'd never have to actually register intent, as you would with the 
promise, we'd never actually block bfcaching, and these messages would 
always be one-shot messages, with no bidirectional IPC to manage.

Looking at your use cases again:

> 1. The process where the responder lives could crash.
> 2. The responder could be running in a dedicated Worker which was
> terminated through Worker.terminate().

'error' event gets fired on the other side.

> 3. The responder could be a web page that the user navigates away from.
> 4. The responder could be a dedicated Worker which is owned by a web
> page that the user navigates away from.

'suspend' event gets fired on the other side.

MessagePorts used for request-response: you set onerror and onsuspend to a 
handler that discards the connection.

MessagePorts used for listening to notifications: you set onerror to a 
handler that discards the connection, and onsuspend/onresume to handlers 
that just mark the other side as unavailable for the time being.
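In code, both patterns stay on the port itself (a sketch; 
discardConnection and the mark* helpers are made up):

   // Request-response: any loss of the other side ends the exchange.
   port.onerror = port.onsuspend = function () {
     discardConnection(port);
   };

   // Notification stream: only crash/terminate is fatal; suspend/resume
   // just toggles the other side's availability.
   port.onerror = function () { discardConnection(port); };
   port.onsuspend = function () { markUnavailable(port); };
   port.onresume = function () { markAvailable(port); };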

It's not perfect, e.g. you never find out if the other side suspended 
forever or not, but that's not worse than with the promise model (in that 
model, instead of finding out the other side suspended forever, you 
actually prevent it from ever suspending, which seems net worse).

What do you think?

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


