On Sat, Sep 11, 2010 at 2:20 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
> On Sat, Sep 11, 2010 at 11:03 AM, Tab Atkins Jr. <jackalmage@gmail.com> wrote:
>> On Fri, Sep 10, 2010 at 4:01 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>>> I think an ideal API for video frame processing would involve handing video
>>> frames to a Worker for processing.
>>
>> Mm, yeah, probably. But then you'd need to be able to do canvas on
>> workers, and hand the data back... This is a complex problem.
>
> Most of the usecases I've seen just do get/putImageData, so it might make
> sense to just provide raw frame data to the Worker and not introduce a
> canvas dependency.

Yes, I think that makes sense, though I would not restrict it to image data
but would also include audio data (once we have an API for it). Dragging
image data through a canvas just to get at the pixels is really annoying.
If we could register a newFrame event for a video in a Worker, with the
event data carrying the frame's pixels together with the associated audio
samples, that would be ideal.
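
Roughly what I am imagining, purely as a sketch (the newframe event name and
the event fields below are made up; nothing like this exists in any spec or
implementation):

    // worker.js -- hypothetical: "newframe", event.frame and event.audio are
    // invented names, used here only to illustrate the proposal above.
    self.onnewframe = function (event) {
      var pixels  = event.frame.data;     // assumed flat RGBA array for the decoded frame
      var width   = event.frame.width;
      var height  = event.frame.height;
      var samples = event.audio.samples;  // assumed PCM samples covering this frame

      // run segmentation, tracking, an FFT over the audio, etc. right here,
      // then hand the results (or modified pixels) back to the page
      self.postMessage({ /* analysis results */ });
    };

The point being that the Worker never has to touch a canvas at all.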

>> So... no newframe event for now, leave timeupdate as it is, and fix
>> this in the future?
>
> I think so. Another factor is that a lot of the video effects people have
> been using canvas for can actually be done with SVG filters, which can be
> GPU-accelerated and are compatible with asynchronous compositing. So it
> might be wise to focus on use-cases for video processing that aren't
> amenable to SVG filters (or extensions thereof), and understand what their
> requirements are.

Things like object segmentation, face recognition and object tracking in
video, or anything involving frequency analysis of audio, come to mind.
Workers seem tailor-made for these anyway, though right now, with the
canvas indirection, it isn't really optimal.
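
To make the indirection concrete, this is roughly what we have to write today
just to get a frame's pixels into a Worker (the worker script name is only a
placeholder):

    var video  = document.querySelector('video');
    var canvas = document.createElement('canvas');
    var ctx    = canvas.getContext('2d');
    var worker = new Worker('analyse-frame.js');  // placeholder script

    video.addEventListener('timeupdate', function () {
      canvas.width  = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);                 // copy the current frame into the canvas
      var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      // frame.data is a flat RGBA array; hand it to the worker for analysis
      worker.postMessage({ width: frame.width, height: frame.height,
                           pixels: frame.data });
    }, false);

And since timeupdate typically fires only a few times per second, this does
not even run once per frame, and the audio samples are not reachable this
way at all.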

Cheers,
Silvia.