[whatwg] Default encoding to UTF-8?

Henri Sivonen hsivonen at iki.fi
Tue Jan 3 00:33:02 PST 2012


On Thu, Dec 22, 2011 at 12:36 PM, Leif Halvard Silli <lhs at russisk.no> wrote:
>> It's unclear to me if you are talking about HTTP-level charset=UNICODE
>> or charset=UNICODE in a meta. Is content labeled with charset=UNICODE
>> BOMless?
>
> Charset=UNICODE in meta, as generated by MS tools (Office or IE, e.g.),
> seems to usually be "BOM-full". But there are still enough occurrences
> of pages without a BOM. I have found UTF-8 pages with the charset=unicode
> label in meta. But the few pages I found contained either a BOM or an
> HTTP-level charset=utf-8. I have too little "research material" when it
> comes to UTF-8 pages with charset=unicode inside.

Making 'unicode' an alias of UTF-16 or UTF-16LE would be useless for
pages that have a BOM, because the BOM is already inspected before
<meta> and if HTTP-level charset is unrecognized, the BOM wins.

Making 'unicode' an alias of UTF-16 or UTF-16LE would be useful for
UTF-8-encoded pages that say charset=unicode in <meta> if alias
resolution happens before UTF-16 labels are mapped to UTF-8.
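
To make the ordering concrete, here is a rough Python sketch (the
'unicode' entry in the alias table is the hypothetical alias under
discussion; the remapping of UTF-16 labels to UTF-8 is the mapping
referred to above). Only if the alias is resolved before that remapping
does a UTF-8 page labelled charset=unicode come out right:

  ALIASES = {
      "utf-8": "UTF-8",
      "unicode": "UTF-16LE",   # hypothetical alias under discussion
  }

  def encoding_from_meta(label):
      """Resolve a <meta charset> label: alias lookup first, then the
      UTF-16-to-UTF-8 remapping that applies in the <meta> case."""
      encoding = ALIASES.get(label.strip().lower())
      if encoding is None:
          return None                      # unrecognized label: ignored
      if encoding in ("UTF-16", "UTF-16LE", "UTF-16BE"):
          return "UTF-8"                   # UTF-16 labels from <meta> map to UTF-8
      return encoding

  print(encoding_from_meta("unicode"))     # -> UTF-8, which is what the
                                           #    mislabelled UTF-8 pages need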

Making 'unicode' an alias for UTF-16 or UTF-16LE would be useless for
pages that are (BOMless) UTF-16LE and that have charset=unicode in
<meta>, because the <meta> prescan doesn't see UTF-16-encoded metas.
Furthermore, it doesn't make sense to make the <meta> prescan look for
UTF-16-encoded metas: the value would be worth honoring only if it
matched a flavor of UTF-16 consistent with the pattern of zero bytes in
the file, so it would be more reliable and straightforward to just
analyze the pattern of zero bytes without bothering to look for
UTF-16-encoded <meta>s.
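
For example, a zero-byte check along these lines (my own rough
heuristic, not anything from the spec) would classify BOMless UTF-16 of
mostly-ASCII markup without ever touching <meta>:

  def sniff_bomless_utf16(first_bytes, threshold=0.4):
      # For mostly-ASCII text, BOMless UTF-16LE puts its zero bytes at odd
      # offsets and UTF-16BE at even offsets.
      even_zeros = sum(1 for i in range(0, len(first_bytes), 2) if first_bytes[i] == 0)
      odd_zeros = sum(1 for i in range(1, len(first_bytes), 2) if first_bytes[i] == 0)
      pairs = max(len(first_bytes) // 2, 1)
      if odd_zeros / pairs > threshold and even_zeros / pairs < 0.05:
          return "UTF-16LE"
      if even_zeros / pairs > threshold and odd_zeros / pairs < 0.05:
          return "UTF-16BE"
      return None   # no clear zero-byte pattern; fall through to other steps

  print(sniff_bomless_utf16("<html>".encode("utf-16-le")))  # -> UTF-16LE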

> When the detector says UTF-8 - that is step 7 of the sniffing algorithm,
> no?
> http://dev.w3.org/html5/spec/parsing.html#determining-the-character-encoding

Yes.

>>  2) Start the parse assuming UTF-8 and reload as Windows-1252 if the
>> detector says non-UTF-8.
...
> I think you are mistaken there: If parsers perform UTF-8 detection,
> then unlabelled pages will be detected, and no reparsing will happen.
> Reparsing won't even increase. You at least need to explain this
> negative spiral theory better before I buy it ... Step 7 will *not*
> lead to reparsing unless the default encoding is WINDOWS-1252. If the
> default encoding is UTF-8, then when step 7 detects UTF-8, parsing can
> continue uninterrupted.

That would be what I labeled as option #2 above.
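
Roughly, in runnable Python and with strict UTF-8 decoding standing in
for "the detector says non-UTF-8", option #2 amounts to the following.
(A real browser would of course work incrementally on the stream rather
than on the whole byte string.)

  def decode_option_2(page_bytes):
      # Optimistically decode as UTF-8; only fall back to a windows-1252
      # reparse when the bytes turn out not to be UTF-8.
      try:
          return page_bytes.decode("utf-8"), False        # no reparse needed
      except UnicodeDecodeError:
          return page_bytes.decode("windows-1252"), True  # reparse happened

  print(decode_option_2("smørbrød".encode("utf-8")))         # detected as UTF-8
  print(decode_option_2("smørbrød".encode("windows-1252")))  # falls back, reparses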

> What we will instead see is that those using legacy encodings must be
> more clever in labelling their pages, or else they won't be detected.

Many pages that use legacy encodings are legacy pages that aren't
actively maintained. Unmaintained pages aren't going to become more
clever about labeling.

> I am a bit baffled here: It sounds like you are saying that there will
> be bad consequences if browsers become more reliable ...

Becoming more reliable can be bad if the reliability comes at the cost
of performance, which would be the case if the kind of heuristic
detector that e.g. Firefox has were turned on for all locales. (I don't
mean the performance impact of running a detector state machine. I
mean the performance impact of reloading the page or, alternatively,
the loss of incremental rendering.)

A solution that would border on reasonable would be decoding as
US-ASCII up to the first non-ASCII byte and then deciding between
UTF-8 and the locale-specific legacy encoding by examining the first
non-ASCII byte and up to 3 bytes after it to see if they form a valid
UTF-8 byte sequence. But trying to gain more statistical confidence
about UTF-8ness than that would be bad for performance (either due to
stalling stream processing or due to reloading).
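
Sketched in Python (the function name and the simplified lead-byte
check are mine, not taken from any spec):

  def guess_encoding(page_bytes, legacy="windows-1252"):
      for i, b in enumerate(page_bytes):
          if b < 0x80:
              continue                     # ASCII so far; keep streaming
          # First non-ASCII byte: does it start a valid UTF-8 sequence?
          if 0xC2 <= b <= 0xDF:
              length = 2
          elif 0xE0 <= b <= 0xEF:
              length = 3
          elif 0xF0 <= b <= 0xF4:
              length = 4
          else:
              return legacy                # not a legal UTF-8 lead byte
          tail = page_bytes[i + 1:i + length]
          if len(tail) == length - 1 and all(0x80 <= t <= 0xBF for t in tail):
              return "UTF-8"
          return legacy
      return "UTF-8"                       # pure ASCII: either label works

  print(guess_encoding("prix: 25 €".encode("utf-8")))         # -> UTF-8
  print(guess_encoding("prix: 25 €".encode("windows-1252")))  # -> windows-1252

Note that this only ever inspects one multi-byte sequence, so it
neither stalls stream processing nor forces a reload.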

> Apart from UTF-16, Chrome seems quite aggressive w.r.t. encoding
> detection. So it might still be a competitive advantage.

It would be interesting to know what exactly Chrome does. Maybe
someone who knows the code could enlighten us?

>>> * Let's say that I *kept* ISO-8859-1 as default encoding, but instead
>>> enabled the Universal detector. The frame then works.
>>> * But if I make the frame page very short, 10 * the letter "ø" as
>>> content, then the Universal detector fails - on a test on my own
>>> computer, it guesses the page to be Cyrillic rather than Norwegian.
>>> * What's the problem? The Universal detector is too greedy - it tries
>>> to fix more problems than I have. I only want it to guess on "UTF-8".
>>> And if it doesn't detect UTF-8, then it should fall back to the locale
>>> default (including falling back to the encoding of the parent frame).
>>>
>>> Wouldn't that be an idea?
>>
>> No. The current configuration works for Norwegian users already. For
>> users from different silos, the ad might break, but ad breakage is
>> less bad than spreading heuristic detection to more locales.
>
> Here I must disagree: Less bad for whom?

For users, performance-wise.

--
Henri Sivonen
hsivonen at iki.fi
http://hsivonen.iki.fi/

