<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
</head>
<body bgcolor="#ffffff" text="#000000">
A few questions and thoughts on the WebWorkers proposal:<br>
<br>
If a WebWorker object is assigned to a local variable inside a complex
script then it cannot be seen or stopped by the calling page. Should
the specification offer document.workers or getAllWorkers() as a means
to iterate over all workers, regardless of where they were created?<br>
<br>
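For illustration, a minimal sketch of what such an enumeration API
might look like. getAllWorkers() is only my suggestion and appears in
no draft; terminate() is assumed to behave as in the draft's Worker
interface:<br>
<pre>
// Hypothetical: enumerate every worker the page has spawned, including
// those whose references were lost inside local variables.
var workers = document.getAllWorkers(); // proposed, not in any draft

for (var i = 0; i < workers.length; i++) {
    workers[i].terminate(); // terminate() as per the draft's Worker API
}
</pre>
<br>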
Is it wise to give a web application more processing power than a
single CPU core (or HT thread) can provide? What stops a web page
hogging ALL cores (deliberately or not) and leaving no resources for
the mouse or keyboard actions required to close the page? (This is not
a contrived example; I have seen both Internet Explorer on Win32 and
Flash on Linux consume 100% CPU on several occasions.) I know it's a
"vendor issue", but should the spec at least recommend that UAs leave
the last CPU/core free for OS tasks?<br>
<br>
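To make the concern concrete, here is how few lines it would take to
saturate every core (spin.js is a hypothetical worker script
containing nothing but an infinite loop):<br>
<pre>
// spin.js (hypothetical worker script):
//     while (true) {}    // burn one core forever

// Calling page: start one busy-loop worker per core, and then some.
for (var i = 0; i < 16; i++) {
    new Worker("spin.js");
}
</pre>
<br>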
Can anybody point me to an existing JavaScript-based web service that
needs more client processing power than a single P4 core can provide?<br>
<br>
Shouldn't an application that requires so
much grunt really be written in Java or C as an applet, plug-in or
standalone
application? <br>
<br>
If an application did require that much computation, isn't it also
likely to need a more efficient inter-"thread" messaging protocol than
passing Unicode strings through MessagePorts? At the very least,
wouldn't it usually require passing binary data, complex objects or
arrays between workers without the additional overhead of a string
encode/decode?<br>
<br>
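For example, shipping even a plain numeric array across a string-only
port means a full serialise/parse round trip on every message (a
sketch, assuming a JSON library on both sides and postMessage()/
onmessage as in the draft; crunch.js is illustrative):<br>
<pre>
// Calling page: encode a large array into a string just to post it.
var worker = new Worker("crunch.js");
var samples = [];
for (var i = 0; i < 1000000; i++) {
    samples.push(i * 0.5);
}
worker.postMessage(JSON.stringify(samples)); // encode overhead

// Inside crunch.js: pay the matching decode cost before any real work.
onmessage = function (event) {
    var data = JSON.parse(event.data); // decode overhead
    // ...actual number crunching...
};
</pre>
<br>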
Is the resistance to adding threads to JavaScript an issue with the
language itself, or a matter of current interpreters being
non-thread-safe?<br>
<br>
The draft spec says "protected" workers are allowed to live for a
"user-agent-defined amount of time" after a page or browser is closed.
I'm not really sure what possible value this could have, since as
authors we won't know whether the UA allows _any_ time and, if so,
whether that time will be enough to complete our cleanup (given the
vast discrepancy in operations-per-second across UAs and client PCs).
If our cleanup can be arbitrarily cancelled, isn't it likely that we
might actually leave the client or server in a worse state than if we
hadn't tried at all? Won't this lead to difficult-to-trace sporadic
bugs arising from browser differences in what could be a rare event (a
close during operation Y instead of during X)?<br>
<br>
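To illustrate the sporadic-bug worry, a sketch of cleanup inside a
"protected" worker. It assumes workers may use synchronous
XMLHttpRequest, and the checkpoint endpoints are purely illustrative:<br>
<pre>
// Runs in a "protected" worker after the calling page has closed.
var begin = new XMLHttpRequest();
begin.open("POST", "/checkpoint/begin", false); // operation X
begin.send(null);

// If the UA's "user-agent-defined amount of time" expires here, the
// server is left holding a half-open checkpoint, a worse state than
// if the worker had never attempted cleanup at all.

var commit = new XMLHttpRequest();
commit.open("POST", "/checkpoint/commit", false); // operation Y
commit.send(null);
</pre>
<br>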
I just don't see any common cases where you'd _need_ multiple OS
threads but still be willing to accept JavaScript's poor performance,
WebWorkers' limited API, and MessagePorts' limited IO. The only things
I can think of are new user annoyances (like delaying browser shutdown
and hogging the CPU). Sure, UAs might let us disable these things, but
then some pages won't work. <a
href="http://stuff.gsnedders.com/spec-gen/webworkers.html">The Working
Draft</a> lists a few examples, most of which appear to use
non-blocking network IO and callbacks anyway. Other examples rely on
the ability for workers to outlive the calling page (which is pretty
contentious). The one remaining example is a contrived mathematical
exercise. Is the scientific world really crying out for complex
theorems to be solved in web browsers? What real-world use cases is
WebWorkers supposed to solve?<br>
<br>
I would like to see WebWorkers happen, but as an author and a user I
have serious concerns about using it in its current form. Is it really
worth implementing, or should more attention be paid to fixing
non-thread-safe practices in the specification so future UAs can
better manage threading internally (i.e. video, IO, sockets and JS all
running on separate threads, or even sets of threads per open
tab/window)?<br>
<br>
Shannon<br>
</body>
</html>