[whatwg] <video> element proposal
maikmerten at gmx.net
Sun Mar 18 06:17:11 PDT 2007
Håkon Wium Lie wrote:
> In the context of codecs, the term "performance" is most often used to
> describe compression ratios (at given levels of quality). There is
> another factor related to performance which is also relevant when
> picking the best codec for the web: how much processing power does a
> given format need? I believe, in general, that the higher "performance"
> a codec has, the more processing power it requires. As a result, it
> may be impossible to decode a "high-performance" video format on some
> devices.
I'm not an expert in video coding, just an interested layman, so take
whatever I say with a grain of salt. I hope that if I happen to spread
wrong information, someone will step up and correct it.
As a rule of thumb, more powerful codecs usually need more processing
power than less powerful ones. From MPEG-1 to MPEG-4 Part 10 (H.264)
there's a steady increase in decoding complexity.
In the case of Theora, the decoding requirements are in the same league
as MPEG-4 Part 2 (the original MPEG-4 video codec, most widely known
through XviD or DivX), perhaps a bit lighter - and the format itself is
targeted at achieving similar compression results.
H.264 is "much more complex". I have no concrete numbers (those would
depend on the exact circumstances anyway), but many devices that cope
with MPEG-4 Part 2 are vastly out of luck with H.264.
Mobile devices playing H.264 usually have DSPs on board to help with the
decoding task - and even then they still don't cope with the more
complex profiles H.264 has to offer. The iPod Video only supports the
Baseline Profile according to http://www.apple.com/ipod/specs.html ,
which is the lowest-complexity profile available. According to
http://electronics.howstuffworks.com/ipod3.htm the iPod used special
video processing hardware to play video.
I'd think browser vendors usually can't rely on DSPs when it comes to
video decoding.
I only know some data points for Theora, but I think MPEG-4 Part 2
should be in the same ballpark.
On the Nokia N800 internet tablet, Theora at 352x208 resolution decodes
with ~45% CPU usage using the feature-complete "theora-exp" decoder,
which will become the new reference decoder. The Nokia N800 seems to use
an OMAP2420 microprocessor, which includes an ARM core and DSP units for
multimedia processing. The decoder, however, is written in plain C,
without ARM-optimized code and with no support for the DSP features.
(At that time the integer-only Vorbis decoder (Tremor) apparently wasn't
yet working on the N800, so they had to use the floating-point decoder,
which is not really suitable for that hardware platform. These problems
have been resolved by now, if I understood correctly.)
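To put that ~45% figure in perspective, here's a back-of-envelope sketch of my own. It assumes decode cost scales roughly linearly with pixel rate (width x height x frame rate); the 25 fps frame rate and the linear-scaling assumption are mine, not measurements:

```python
# Back-of-envelope sketch (my assumptions, not measurements from any
# codec): decoder CPU load scales roughly with the pixel rate, so one
# measured resolution gives a crude estimate for another. The ~45% CPU
# at 352x208 is the N800 data point from above; 25 fps is assumed.

def pixel_rate(width, height, fps):
    """Pixels the decoder must produce per second."""
    return width * height * fps

def estimated_cpu(measured_cpu, measured_res, target_res, fps=25):
    """Linearly extrapolate CPU usage from a measured resolution."""
    mw, mh = measured_res
    tw, th = target_res
    return measured_cpu * pixel_rate(tw, th, fps) / pixel_rate(mw, mh, fps)

# N800: ~45% CPU at 352x208. Rough estimate for 352x288:
print(round(estimated_cpu(45.0, (352, 208), (352, 288)), 1))  # 62.3
```

Under that (crude) assumption the N800 would still have headroom at 352x288, though real decoders don't scale perfectly linearly.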
Another data point is the OLPC "$100 notebook" hardware. That one uses
an "AMD Geode GX-500 at 1.0W", which is basically a Cyrix MediaGX
(later renamed to National Semiconductor Geode, then sold to AMD) at 366
MHz. It has an x86-compatible core that is roughly Pentium-I class in
integer performance and supports MMX instructions. Actually, I think
there may be cell phones out there with more raw processing power ;)
The OLPC hardware is capable of decoding Theora at CIF (352x288)
resolution in realtime (that's mostly a worst-case estimate, for content
with much movement). For accessing video content on e.g. Wikipedia that
should suffice, though.
> Therefore, on the web, a "high-performance" format may not be suitable
> as it excludes devices with limited processing power. On the other
> hand, these devices may also have limited connectivity so compression
> is called for.
It's more or less a tradeoff situation. If you double the processing
requirements, you usually don't come even close to doubling the coding
efficiency. If you can't use special hardware, you often have no choice
but to pick a codec of moderate complexity.
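A tiny numeric illustration of that point - the numbers here are invented for illustration, not measurements of any real codec pair:

```python
# Invented illustrative numbers (my assumptions): suppose a "complex"
# codec needs 2x the CPU of a "moderate" one, but at equal quality it
# only saves ~25% bitrate.
moderate_cpu, moderate_kbps = 50.0, 400.0   # assumed baseline codec
complex_cpu = moderate_cpu * 2.0            # doubled processing requirements
complex_kbps = moderate_kbps * 0.75         # only ~25% bitrate saved

print(complex_cpu / moderate_cpu)    # 2.0 -> processing cost doubled
print(moderate_kbps / complex_kbps)  # ~1.33 -> efficiency far from doubled
```

So paying 2x the CPU buys only about a 1.33x gain in coding efficiency in this made-up example - which is why, without hardware assistance, a moderate-complexity codec can be the only realistic option.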