[whatwg] onclose events for MessagePort

Jonas Sicking jonas at sicking.cc
Wed Dec 11 16:54:27 PST 2013


On Wed, Dec 11, 2013 at 3:54 PM, Ian Hickson <ian at hixie.ch> wrote:
> I really don't understand why we'd want that level of complexity for
> something that we can solve with just a single event handler, though. The
> problem, as described, is just "we need to know when a remote port is
> killed due to an OOM error". The event handler does this. Why do we need
> to complicate it with two-way metadata messages to pin workers and so on?
> I really don't understand what that gets us.

Starting with this since there seems to be some misunderstanding
about the use cases.

The use case here is a page (or a worker) sending a message through a
message port and wanting to know if the other side "disappears" before
it is able to respond.
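
To make that concrete, here's roughly what the asking side wants to
be able to write. This is only a sketch: pin() and the promise it
returns are placeholder names for whatever shape a pinning-style API
ends up taking, and "worker" just stands in for whoever we transfer
the other port to.

  var channel = new MessageChannel();
  worker.postMessage({ port: channel.port2 }, [channel.port2]);

  channel.port1.onmessage = function (e) {
    // Got the answer; all is well.
  };

  // What's missing today: a signal that is guaranteed to fire if the
  // other side goes away before answering. With a hypothetical
  // pin()-style API the rejection below would cover all four
  // scenarios listed below.
  channel.port1.pin().catch(function () {
    // Other side crashed, was terminate()d, or was navigated away.
  });

  channel.port1.postMessage("question");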

There are a few scenarios where the responder could disappear.
1. The process where the responder lives could crash.
2. The responder could be running in a dedicated Worker which was
terminated through Worker.terminate().
3. The responder could be a web page that the user navigates away from.
4. The responder could be a dedicated Worker which is owned by a web
page that the user navigates away from.

In scenario 3 the page could in theory send a message during its
unload handler indicating "I won't have time to answer your question
because I'm about to get killed".
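
Something like this, just as a sketch (here "port" is the responder's
end of the channel):

  window.addEventListener("unload", function () {
    // Best-effort heads-up to the other side that no answer is coming.
    port.postMessage({ type: "wont-be-able-to-respond" });
  });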

However that is not possible in any of the other scenarios.

In scenario 4 it's not possible because no unload event is fired in
the worker. Nor could the owning page send a message to the worker
saying "you're about to go away", since the worker won't have time to
process it: the UA is about to kill it (or at least freeze it)
because its owning page is being navigated away from.

Even if we did add an unload event to dedicated workers, that still
wouldn't fully solve scenario 4, since one of the use cases of
dedicated workers is being able to write code that stays off the
event loop for a long time. Such a script would not have time to
receive and process the unload event before the worker was killed or
frozen.
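
For concreteness, a worker like the following never returns to the
event loop while it's working, so even a hypothetical unload event
couldn't be delivered to it in time:

  self.onmessage = function (e) {
    var result = 0;
    // Long synchronous computation: the worker stays off the event
    // loop the whole time, so no events can be dispatched to it.
    for (var i = 0; i < 1e9; i++) {
      result += Math.sqrt(i);
    }
    postMessage(result);
  };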

In theory the web developer could try to solve scenarios 2 and 4 by
doing what's described in [3]. However that seems painful enough that
I don't think we should consider it a solution.

So I hope that describes the use case, and that it also describes the
dedicated workers issue.

>> The proposal at [1] does not prevent bfcaching *anytime* that
>> MessagePorts are alive in a page that is navigated away from. We can
>> always bfcache the page if we don't know of any "other sides" having
>> pinned their port. If we later detect that a port is being pinned, we
>> can indicate an error to the pinning side, and throw the bfcached page
>> out of the bfcache at that time.
>
> It might be too late by then, since you might have already navigated back
> to the bfcached page. This is why the IPC has to be synchronous, at the
> latest during the back() navigation. (Not literally synchronous, I guess,
> since otherwise you'd have trouble handling a mirror image scenario where
> both sides were playing this game, but at least it needs to block the
> history traversal back to a document that is bfcached but for which we may
> not yet have received confirmation that the bfcache is clean.)

No sync IPC needed. When a port is pinned, you send an async message
to the process that contains the page for the "other side". When that
process receives the message, you check whether the page is currently
being displayed.

If the page has been completely torn down then you send a message back
saying that the page is dead and that the promise created during
pinning should be rejected.

If the page is sitting in the bfcache, you remove it from the bfcache
and send a message back saying that the page is dead and that the
promise created during pinning should be rejected.

If the page is displayed, then you set a flag indicating that, if the
page is navigated away from, it should not go into the bfcache and we
should instead send a signal to reject the promise.

Obviously, if the process has crashed before it is able to process
the message, you send a message back to reject the promise.
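
In pseudocode, the side receiving the "port was pinned" message would
do something like the following. This is only an implementer-level
sketch; the names, page states and reply mechanism are made up, and
the crashed-process case is handled by whatever part of the UA
detects the crash, which rejects the promise on the dead process's
behalf.

  // Stand-in for the UA's internal bfcache eviction step.
  function evictFromBFCache(page) {
    page.state = "torn-down";
  }

  function onPortPinned(page, reply) {
    if (page.state === "torn-down") {
      // The page is already completely gone.
      reply({ rejectPinPromise: true });
    } else if (page.state === "bfcached") {
      // Kick it out of the bfcache and report it as dead.
      evictFromBFCache(page);
      reply({ rejectPinPromise: true });
    } else {
      // The page is currently "displayed": don't bfcache it on a
      // later navigation, and reject the promise when it's torn down.
      page.blockBFCacheOnNavigation = true;
      page.rejectPinOnTeardown = true;
    }
  }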

The same thing is done when unpinning. You send a message to the other
side saying that it's getting unpinned.

If by the time the message gets there the page has already been
navigated away from, and thus torn down, do nothing. It's unfortunate
that the page was torn down unnecessarily. But like I've said,
completely preventing pages from being kicked out of the bfcache is
not something I worry about as an implementor of the only bfcache
implementation, and it is also something that can be solved
orthogonally if we really want to.

(In theory we could add API to lessen the risk of kicking pages out
of the bfcache here, at the cost of added complexity. Proposals
welcome.)

If the page is still rendered, signal that it no longer needs to be
prevented from entering the bfcache.

If the process has crashed, do nothing.
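
And the unpinning side of the same sketch (same caveats as above):

  function onPortUnpinned(page) {
    if (page.state === "displayed") {
      // Clear the flags set when the port was pinned; the page may
      // be bfcached again on a later navigation.
      page.blockBFCacheOnNavigation = false;
      page.rejectPinOnTeardown = false;
    }
    // If the page was torn down, or the process crashed, there is
    // nothing to do.
  }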

>> Finally, has any implementation indicated that the proposal in [1] is
>> too burdensome to implement? If they have that of course changes the
>> discussion dramatically here. But I'm reluctant to guess about what
>> implementations will or won't implement. I'd rather ask them.
>>
>> [1] http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2013-October/041250.html
>
> I strive for a higher bar than just "no objections". Certainly though I
> welcome further input from everyone else here.

Your proposal doesn't even meet this bar. But more importantly, your
argument that it's too burdensome to implement doesn't carry much
weight when no actual implementor has expressed that concern.

[3] http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2013-October/041057.html

/ Jonas


