[whatwg] onclose events for MessagePort
Ian Hickson
ian at hixie.ch
Wed Dec 11 15:54:59 PST 2013
On Wed, 11 Dec 2013, Jonas Sicking wrote:
>
> The proposal at [1] would indeed cause pages that hold a MessagePort
> to not be bfcached if the other side "pins" its port.
This seems to me to require sync IPC during navigation, which is a
non-starter.
> The proposal at [1] does not prevent bfcaching *anytime* that
> MessagePorts are alive in a page that is navigated away from. We can
> always bfcache the page if we don't know of any "other sides" having
> pinned their port. If we later detect that a port is being pinned, we
> can indicate an error to the pinning side, and throw the bfcached page
> out of the bfcache at that time.
It might be too late by then, since you might have already navigated back
to the bfcached page. This is why the IPC has to be synchronous, at the
latest during the back() navigation. (Not literally synchronous, I guess,
since otherwise you'd have trouble handling a mirror-image scenario where
both sides were playing this game, but at least it needs to block the
history traversal back to a document that is bfcached but for which we may
not yet have received confirmation that the bfcache is clean.)
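Concretely, as I read [1], the script-visible flow would be roughly the
following. (pin() and "pinerror" are names I am making up here so there
is something to point at; neither is in any spec. 'frame' is just some
iframe holding the other document.)

  // Illustrative only: pin() and onpinerror stand in for the mechanism
  // described in [1]; neither is a shipping or specced API.
  var channel = new MessageChannel();
  frame.contentWindow.postMessage('take this', '*', [channel.port2]);
  channel.port1.pin();  // ask the UA to keep the remote side un-bfcached
  channel.port1.onpinerror = function () {
    // The remote document was already in the bfcache when the pin
    // arrived; the UA must evict it now -- possibly after the user has
    // already traversed back to it, which is the race described above.
  };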
> If we want to worry about improving bfcaching beyond this proposal, I
> think there are ways to do that. But I think that is orthogonal. For
> example, we've discussed adding an API which allows a page to say "I'm
> aware that there are plugins on this page and that that would normally
> prevent me from being bfcached. However, I'm fine with being put in the
> bfcache and those plugins being stopped when that happens; I can deal".
> We could similarly add other APIs which indicate that the page is able
> to handle recovering from actions that happen when a page is being put
> in the bfcache, such as aborted network requests or aborted message
> channels. But I don't think any of that work should block us here. And
> I'm saying that as one of the implementors of the only bfcaching
> browser, as far as I know.
I agree that that is orthogonal.
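For what it's worth, I would expect a page opting in along those lines to
do its actual recovery from the existing pageshow event; the opt-in API
itself would be new, but the recovery hook is already there:

  // Sketch only: the opt-in API described above does not exist, but
  // pageshow and its "persisted" flag do; persisted is true when the
  // page is being restored from the bfcache.
  window.addEventListener('pageshow', function (e) {
    if (e.persisted) {
      restartAbortedRequests();       // hypothetical app-level helpers
      reestablishMessageChannels();
    }
  });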
> Are there any UAs that have indicated that they feel stable enough that
> they don't worry about crashes?
I don't know if any vendor would admit that on the record, but consider
that we have no other features in the platform for dealing with crashes at
all, and that this, the first such feature, has only come up in the
context of an OOM killer, not a logic crash.
> And note that navigations can't be handled by simply sending a message
> in the onunload handler. That strategy doesn't work in dedicated
> workers. And even if we add onunload to workers that doesn't help
> long-running worker scripts.
I don't understand the issue with dedicated workers. Can you elaborate?
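For a window, I take the strategy you are dismissing to be roughly this
('port' being whatever port this page happens to hold):

  // Page-side sketch: warn the other end of the channel before the
  // navigation completes.
  window.addEventListener('unload', function () {
    port.postMessage({ type: 'closing' });
  });
  // A dedicated worker's global scope has no unload-style event today,
  // and a long-running script in it would not yield to fire one anyway.

Is the worker case different in some way beyond that?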
> Finally, has any implementation indicated that the proposal in [1] is
> too burdensome to implement? If they have, that of course changes the
> discussion dramatically here. But I'm reluctant to guess about what
> implementations will or won't implement. I'd rather ask them.
>
> [1] http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2013-October/041250.html
I strive for a higher bar than just "no objections". Certainly, though, I
welcome further input from everyone else here.
I really don't understand why we'd want that level of complexity for
something that we can solve with just a single event handler, though. The
problem, as described, is just "we need to know when a remote port is
killed due to an OOM error". The event handler does this. Why do we need
to complicate it with two-way metadata messages to pin workers and so on?
I really don't understand what that gets us.
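For reference, the totality of what I have in mind is this (onclose here
is the proposed event, not something any UA ships today; reconnect() is a
hypothetical app-level function):

  // Proposed usage: 'close' fires on a port when its entangled remote
  // port is gone for good, e.g. because its process was OOM-killed.
  var channel = new MessageChannel();
  worker.postMessage({ port: channel.port2 }, [channel.port2]);
  channel.port1.onclose = function () {
    // The remote side is gone; rebuild the channel or degrade.
    reconnect();
  };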
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'