[whatwg] Storage mutex

Robert O'Callahan robert at ocallahan.org
Sun Aug 23 23:33:35 PDT 2009

On Sat, Aug 22, 2009 at 10:22 PM, Jeremy Orlow <jorlow at chromium.org> wrote:

> On Sat, Aug 22, 2009 at 5:54 AM, Robert O'Callahan <robert at ocallahan.org> wrote:
>> On Wed, Aug 19, 2009 at 11:26 AM, Jeremy Orlow <jorlow at chromium.org> wrote:
>>> First of all, I was wondering why all user prompts are specified as "must
>>> release the storage mutex" (
>>> http://dev.w3.org/html5/spec/Overview.html#user-prompts).  Should this
>>> really say "must" instead of "may"?  IIRC (I couldn't find the original
>>> thread, unfortunately) this was added because of deadlock concerns.  It
>>> seems like there might be some UA implementation specific ways this could
>>> deadlock and there is the question of whether we'd want an alert() while
>>> holding the lock to block other execution requiring the lock, but I don't
>>> see why the language should be "must".  For Chromium, I don't think we'll
>>> need to release the lock for any of these, unless there's some
>>> deadlock scenario I'm missing here.
>> So if one page grabs the lock and then does an alert(), and another page
>> in the same domain tries to get the lock, you're going to let the latter
>> page hang until the user dismisses the alert in the first page?
> Yes.  And I agree this is sub-optimal, but shouldn't it be left up to the
> UAs what to do?  I feel like this is somewhat of an odd case to begin with
> since alerts lock up most (all?) browsers to varying degrees even without
> using localStorage.

That behaviour sounds worse than what Firefox currently does, where an alert
disables input to all tabs in the window (which is already pretty bad),
because it will make applications in visually unrelated tabs and windows hang.

>>> Given that different UAs are probably going to have other scenarios where
>>> they have to drop the lock (some of them may even be purely implementational
>>> issues), should we add some way for us to notify scripts the lock was
>>> dropped?  A normal event isn't going to be of much use, since it'll fire
>>> after the script's execution ends (so the lock would have been dropped by
>>> then anyway).  A boolean doesn't seem super useful, but it's better than
>>> nothing and could help debugging.  Maybe fire an exception?  Are there other
>>> options?
>> A generation counter might be useful.
> Ooo, I like that idea.  When would the counter increment?  It'd be nice if
> it didn't increment if the page did something synchronous but no one else
> took the lock in the mean time.

Defining "no-one else" may be difficult. I haven't thought this through, to
be honest, but I think you could update the counter every time the storage
mutex is released and the shared state was modified since the storage mutex
was acquired. Reading the counter would acquire the storage mutex. You'd
basically write

var counter = window.storageMutexGenerationCounter;
... do lots of stuff ...
if (window.storageMutexGenerationCounter != counter) {
  // abort, or refresh local state, or something
}
I'm not sure what you'd do if you discovered an undesired lock-drop, though.
If you can't write something sensible instead of "abort, or something", it's
not worth doing.

> But getStorageUpdates is still not the right name for it.  The only way
> that there'd be any updates to get is if, when you call the function,
> someone else takes the lock and makes some updates.  Maybe it should be
> yieldStorage (or yieldStorageMutex)?  In other words, maybe the name should
> imply that you're allowing concurrent updates to happen?

I thought that's what getStorageUpdates implied :-).

"He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all." [Isaiah 53:5-6]
