[whatwg] WebVTT feedback (was Re: Video feedback)
philipj at opera.com
Tue Jun 7 03:12:47 PDT 2011
On Sat, 04 Jun 2011 17:05:55 +0200, Silvia Pfeiffer
<silviapfeiffer1 at gmail.com> wrote:
>> On Mon, 3 Jan 2011, Philip J盲genstedt wrote:
Silvia, is your mail client a bit funny with character encodings? (The
UTF-8 representation of U+00E4 is the same as the GBK representation of
盲.)
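[Editor's note: the byte-level coincidence behind the mojibake above can be checked directly. A small illustrative snippet, not from the original mail:]

```python
# U+00E4 (ä) encodes in UTF-8 as the two bytes C3 A4; decoding those
# same bytes as GBK yields the CJK character 盲 seen in the quoted name.
utf8_bytes = "\u00e4".encode("utf-8")
print(utf8_bytes.hex())           # c3a4
print(utf8_bytes.decode("gbk"))   # 盲
```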
>>> > > * The "bad cue" handling is stricter than it should be. After
>>> > > collecting an id, the next line must be a timestamp line; otherwise
>>> > > we skip everything until a blank line, so in the following the
>>> > > parser would jump to "bad cue" on line "2" and skip the whole cue.
>>> > >
>>> > > 1
>>> > > 2
>>> > > 00:00:00.000 --> 00:00:01.000
>>> > > Bla
>>> > >
>>> > > This doesn't match what most existing SRT parsers do, as they
>>> > > look for timing lines and ignore everything else. If we really need
>>> > > to collect the id instead of ignoring it like everyone else, this
>>> > > should be more robust, so that a valid timing line always begins a
>>> > > new cue. Personally, I'd prefer if it is simply ignored and that we
>>> > > use some form of in-cue markup for styling hooks.
>>> > The IDs are useful for referencing cues from script, so I haven't
>>> > removed them. I've also left the parsing as is for when neither the
>>> > first nor second line is a timing line, since that gives us a lot of
>>> > headroom for future extensions (we can do anything so long as the
>>> > second line doesn't start with a timestamp and "-->" and another
>>> > timestamp).
>>> In the case of feeding future extensions to current parsers, it's much
>>> better fallback behavior to simply ignore an unrecognized second line
>>> than to discard the entire cue. The current behavior seems needlessly
>>> strict and makes the parser more complicated than it needs to be. My
>>> preference is to just ignore anything preceding the timing line, but
>>> even if we must have IDs, it can still be made simpler and more robust
>>> than what is currently spec'ed.
>> If we just ignore content until we hit a line that happens to look like
>> a timing line, then we are much more constrained in what we can do in
>> the future. For example, we couldn't introduce a "comment block" syntax,
>> since any comment containing a timing line wouldn't be ignored. On the
>> other hand, if we keep the syntax as it is now, we can introduce a
>> comment just by having its first line include a "-->" but not match the
>> timestamp syntax, e.g. by having it be "--> COMMENT" or some such.
>> Looking at the parser more closely, I don't really see how doing
>> something more complex than skipping the block entirely would be simpler
>> than what we have now, anyway.
> Yes, I think that can work. The pattern of a line with "-->" without
> time markers is currently ignored, so we can introduce something with
> it for special content like comments, style and default.
This seems to have been Ian's assumption, but it's not what the spec says.
Following the steps of the parser algorithm:
32. If line contains the three-character substring "-->" (U+002D
HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN), then jump to
the step labeled timings below.
40. Timings: Collect WebVTT cue timings and settings from line, using cue
for the results. If that fails, jump to the step labeled bad cue.
54. Bad cue: Discard cue.
(Followed by a loop to skip until the next empty line.)
The effect is that any line containing "-->" that is not a valid timing
line causes everything up to the next blank line to be ignored.
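[Editor's note: a minimal sketch of the behavior described above. This is hypothetical illustration code, not the spec's full parsing algorithm, and the timestamp pattern is heavily simplified:]

```python
import re

# Simplified stand-in for the WebVTT cue timings syntax.
TIMING = re.compile(
    r"(\d{2}:)?\d{2}:\d{2}\.\d{3}[ \t]+-->[ \t]+(\d{2}:)?\d{2}:\d{2}\.\d{3}"
)

def parse_blocks(lines):
    cues, i = [], 0
    while i < len(lines):
        line = lines[i]
        if "-->" in line:                      # step 32: jump to timings
            if TIMING.search(line):            # step 40: collect timings
                cue = {"timings": line.strip(), "text": []}
                i += 1
                while i < len(lines) and lines[i].strip():
                    cue["text"].append(lines[i])
                    i += 1
                cues.append(cue)
            else:                              # step 54: bad cue -- discard,
                while i < len(lines) and lines[i].strip():
                    i += 1                     # skip until the next blank line
        i += 1
    return cues

# A "--> COMMENT" line fails the timings parse, so it and everything up
# to the next blank line is discarded; the real cue after it survives.
sample = ["--> COMMENT", "this is skipped", "",
          "00:00:00.000 --> 00:00:01.000", "Bla", ""]
print(parse_blocks(sample))
```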
>>> * underline: EBU STL, CEA-608 and CEA-708 support underlining of text
>> I've added support for 'text-decoration'.
> And for <u>. I am happy now, thanks. :-)
Huh. For those who are surprised, this was added in
http://html5.org/r/6004 at the same time as <u> was made conforming for
HTML. See http://www.w3.org/Bugs/Public/show_bug.cgi?id=10838
>>> * Voice synthesis of e.g. mixed English/French captions. Given that it
>>> would only be useful to people who know both languages, it seems not
>>> worth complicating the format for.
>> Agreed on all fronts.
> I disagree with the third case. Many people speak more than one
> language, and even if they don't speak the language that is in use in a
> cue, it is still bad to render it using the wrong language model,
> in particular if it is rendered by a screen reader. We really need a
> mechanism to attach a language marker to a cue segment.
It's not needed for the rendering of French vs English, is it? It is
theoretically useful for CJK, but as I've said before, it seems to be more
common to transliterate the foreign script in these cases.
>>> Do you have any examples of real-world subtitles/captions that would
>>> benefit from more fine-grained language information?
>> This kind of information would indeed be useful.
> Note that I'm not so much worried about captions and subtitles here,
> but rather worried about audio descriptions as rendered from cue text
When would one want these descriptions to be multi-language?