[whatwg] Installable web apps

Aaron Boodman aa at google.com
Fri Jun 4 11:25:18 PDT 2010

On Fri, Jun 4, 2010 at 4:58 AM, Henri Sivonen <hsivonen at iki.fi> wrote:
> On May 26, 2010, at 20:10, Aaron Boodman wrote:
>> This isn't really the point of this mail, but I just want to point out
>> that there are more differences between wgt and crx than the format of
the manifest file. The most important is that the identity of a crx
>> file is a public key, and all crx files are self-signed by their key.
>> This makes a crx file's identity unforgeable.
> .wgt supports signing, too, but as with Sun .jar or Mozilla .xpi the signing proves that the .wgt came from the entity that the private key belongs to as certified by PKI--as opposed to proving without PKI that it came from the same source as the previous version of the .wgt.
> After googling around a bit, I was unable to find a signed .crx file for analysis. (I took apart 3 .crx files and gave up.) Is the signing mechanism documented somewhere? .wgt reinvents the .jar signing wheel by implementing the basic idea of .jar signing with XML Signatures.
> (Note that I am not in any way implying that PKI were better. If Google can actually get extension authors to sign their extensions on average and proving that extension updates came from the same source as the previous version, that's a pretty big win over the Firefox extension signing situation. In principle, Firefox extensions can be signed to the stronger level of proving who signed them as opposed to proving just "same as before", but in practice, virtually no one--not even Mozilla Labs--signs Firefox extensions, so it doesn't help much that the level of proof would be stronger if signed.)

Every crx file is signed. The signature and public key are part of the
zip file itself, just after the header. The zip format allows extra
data there. When you took apart those crx files, if you used 'unzip'
from the command line, you may have seen 'Ignoring xx bytes of extra
data...'. That was the public key and signature being discarded :).

For the most part developers don't know or care about the details of
how signing works. We tried really hard to make it something that just
happens as part of developing your extension.

For users who host on our gallery, it is totally hidden. They just
upload zip files and the gallery manages the key.

For users who self-host, they will be aware that there is something
called a "key file" that contains gibberish, which they need to keep
safe and need to use each time they package their extension.

You can get a feel for what this is like from a developer's point of view here:


If you're interested in the actual format, it has been a longstanding
todo to document it precisely :-/ But it is basically: <zip
header><crx version><key><signature><zip data>. Somebody created a
tiny ruby script that does it, here:

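For illustration, splitting a .crx into those parts could look like the
sketch below. This assumes (as shipped crx files do) that the key and
signature are length-prefixed with little-endian 32-bit integers after a
4-byte magic and a version field; the function name and return shape are
mine, not an official API:

```javascript
// Sketch: split a .crx file into its parts, per the layout described above.
// Assumed header: 4-byte magic ("Cr24"), uint32 version, uint32 key length,
// uint32 signature length (all little-endian), then key, signature, zip data.
function parseCrx(buf) {
  const magic = buf.toString('ascii', 0, 4);
  const version = buf.readUInt32LE(4);
  const keyLen = buf.readUInt32LE(8);
  const sigLen = buf.readUInt32LE(12);
  const keyEnd = 16 + keyLen;
  return {
    magic,
    version,
    key: buf.slice(16, keyEnd),            // public key bytes
    signature: buf.slice(keyEnd, keyEnd + sigLen),
    zip: buf.slice(keyEnd + sigLen),       // ordinary zip archive from here on
  };
}
```

This is also why 'unzip' mostly works on a .crx: everything after the
prefix is a normal zip archive, and the prefix is what gets reported as
extra data.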

>>> I think it follows that to install a Web app, you navigate to its URL and bookmark it. There is no need to have an icon in a zip file for this: HTML5 already provides <link rel=icon sizes=... that the app can use to declare its icon, which can be pinned to cache upon bookmarking. So far, nothing new to design.
>>> A plain bookmark doesn't elevate the bookmarked app sufficiently to be special in the system app switcher (like Prism) or inside the tab system of the browser (like Firefox 4 application tabs). A plain bookmark also doesn't pre-grant any permissions or ensure that the app stays in the cache.
>>> I think the Webby step to take from here is to introduce the concept of application bookmarks (still without zip files). To "install" a Web application, the user would navigate to the app's URL and create an application bookmark.
>> For Chrome this isn't the UX we want. We want users to click a link in
>> the content area and be presented with an install dialog. We think
>> that going to something in the browser to "applicationify" a web app
>> is too indirect and that many users will not get it.
>> That said, I think there is room to support multiple models of
>> installation (or bookmarking, or whatever you want to call it),
>> though.
> Why don't you want the UX of the applicationification process starting from browser chrome? Screen real estate reasons? Expectation or research showing that users don't understand the difference anyway?

Our experience is that:

a) Users typically don't see this kind of UI in the browser chrome.
The easy example is the feed badge in the URL bar, which fails with
normal people. To make this more discoverable, browsers ended up
supporting navigating directly to a feed file to initiate subscription.

b) Because users miss the browser UI, developers end up wanting to
help and end up resorting to terrible things like putting a big arrow
in their content that says "click over there >>".

> Don't you need an annoying "This site is trying to applicationify itself. Allow or Deny?" piece of UI if the site can start the process?

Yes, and we need to only honor the script call if it comes from a user
gesture. But we still think on the whole this is a better tradeoff.

But like I said, I think there's room for both kinds of UX. And in
fact, Chrome may some day support both kinds. But to start we are most
interested in content-initiated.

>> - Application name (you didn't mention this, but I think it is nice to
>> have distinct from the <title>, which is often overloaded with status
>> information)
> As Lachy mentioned, there's already in-document syntax for this:
> http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#meta-application-name
> I'm not aware of anyone implementing it, though.

Actually Chrome implements it for our application shortcuts :-/. Left
hand, meet right hand.

>> Another one that we would like in Chrome is a path prefix for the app.
>> This is to handle the case of applications like Google Reader which
>> are on http://www.google.com/reader.
> How does giving permissions to a prefixed application interact with the origin-based security model?

Poorly. Origin is baked so deeply into the platform that it is the true
boundary, and that can probably never be changed. In Chrome, we can
get close because we can enforce process isolation based on the path,
but making that bulletproof is an ongoing project that might never be
finished.

However, I think it is still useful to know the paths an application
uses. One example of this is storage.

It would be really nice to be able to show users storage used by
applications ("Google Docs", "Google Reader"). But this isn't possible
since multiple apps are frequently served from the same origin. The best
we can currently do is "www.google.com".

If we add paths to the mix, we can do this. Applications on the same
origin can circumvent it if they want, but why would they? The
same-origin policy already assumes that apps on the same origin trust
and cooperate with each other. That doesn't mean it isn't useful for
the UA to know which one is which.
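To make the storage-attribution idea concrete, here is a rough sketch of
how a UA could map a resource URL back to an installed app using the
origin plus the path prefixes from a manifest's "urls" field. The data
shapes here are my invention for illustration, not anything specified:

```javascript
// Sketch: attribute a URL to an installed app by origin + path prefix.
// Each app record holds its origin and the path prefixes declared in its
// manifest's "urls" field (resolved to absolute paths at install time).
function appForUrl(apps, urlString) {
  const url = new URL(urlString);
  return apps.find(app =>
    url.origin === app.origin &&
    app.prefixes.some(prefix => url.pathname.startsWith(prefix))
  ) || null;
}
```

With records for Google Reader and Google Docs on www.google.com, a UA
could now label storage per app instead of lumping it all under the
shared origin.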

>> Every URL in an application will have to <link> to this same
>> information. It seems like it would be better to just <link> to a
>> separate resource that contains the information.
>> To me, this all leads to the following proposal:
>> <html>
>> <head>
>> <!-- for UAs that want a button in the browser chrome to appify -->
>> <link rel="application-description" href="myapp.json">
>> </head>
>> <body>
>> <!-- for UAs that want a button in the content area to appify -->
>> <button onclick="navigator.installApplication()">install</button>
>> </body>
>> </html>
>> // myapp.json
>> {
>>  "name": "My Application",
>>  "icons": ...,
>>  "urls": [
>>    "reader/"
>>  ],
>>  "permissions": [
>>    ...
>>  ]
>> }
>> WDYT?
> Makes sense to me (if you add sniffing for navigator.installApplication before showing a button that calls it), although I doubt you'll be able to avoid repeating a <link> to the icon anyway.

As I said in my follow up email to this one, I do agree it is a shame
to not use the platform features that have already been added that
start to meet these needs. I think it is worth starting by trying to
just reuse those.
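Henri's sniffing suggestion could be sketched like this, keeping in mind
that navigator.installApplication is the hypothetical API from the
proposal above, not something any browser ships today:

```javascript
// Sketch: only show the install button when the (hypothetical)
// navigator.installApplication API exists, and call it from a click
// handler so the call happens inside a user gesture.
function setupInstallButton(button, nav) {
  if (typeof nav.installApplication !== 'function') {
    button.hidden = true;  // UA has no content-initiated install; hide the button
    return;
  }
  button.addEventListener('click', () => nav.installApplication());
}
```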

Thanks for the feedback,

- a
