There's a lot of fragmentation/consolidation going on around streams across the WHATWG, W3C and MS. NOTE: This work is impacting, or will impact, many of the WGs.

Part of the problem here may come from the fact that the term "stream" is used across many different media and programming contexts.

For an overview of "all" the current flows from source to sink, with a focus on post-processing, see http://www.slideshare.net/robman/web-standards-for-ar-workshop-at-ismar13/14

It's probably useful to separate this group's discussion into Streams (flow) and the Post-Processing that's built on top of (or into) these Streams. With a focus on video/computer vision, in particular contrast the (currently very inefficient) Video/Canvas model with the slightly more efficient Video/Shader model.
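
As a rough illustration only (the element lookup and the WebGL setup are assumed, and the actual processing is elided), the two paths look something like this: the Canvas path copies every frame back into JS-visible memory via getImageData(), while the shader path hands the frame to the GPU as a texture and never exposes the pixels to JS.

    // Sketch only: assumes a <video id="v"> that is already playing.
    const video = document.getElementById('v');

    // --- Video/Canvas model: every frame is copied into JS-visible memory ---
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const ctx2d = canvas.getContext('2d');

    function canvasFrame() {
      ctx2d.drawImage(video, 0, 0);                                        // paint the frame
      const pixels = ctx2d.getImageData(0, 0, canvas.width, canvas.height); // full CPU readback
      // ... run CV over pixels.data in JS (a full-frame copy every tick) ...
      requestAnimationFrame(canvasFrame);
    }

    // --- Video/Shader model: the frame goes straight to a GPU texture ---
    const gl = document.createElement('canvas').getContext('webgl');
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    function shaderFrame() {
      gl.bindTexture(gl.TEXTURE_2D, tex);
      // Upload the current frame; no JS-visible pixel copy is made.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
      // ... draw a quad with a fragment shader that does the processing ...
      requestAnimationFrame(shaderFrame);
    }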

Under the hood the browser (at least Chrome) currently uses GPU-based opaque texture IDs, so references to textures can be passed around efficiently with minimal copies. If we could access such a texture inside GLSL without it being painted or copied into the Web Platform (i.e. only inside the shader) then we could potentially take a big step forward. But as soon as the pixels are made available to the Web Platform, a massive performance hit occurs.

For much computer vision work this is fine, as all processing "could" happen in GLSL and only the resulting key features or other metadata would be returned - not the actual pixels.
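
A minimal sketch of that pattern, assuming the video texture and WebGL context from the sketch above (shader compilation and the actual draw call are elided): the per-pixel work lives in GLSL, the render target is deliberately tiny, and readPixels() only ever pulls back that small feature buffer rather than the full frame.

    // Sketch only: assumes `gl` from above; compiling/linking this shader and
    // drawing the full-screen quad are elided.
    const FEATURE_W = 16, FEATURE_H = 16;   // illustrative feature-map size

    const fragSrc = `
      precision mediump float;
      uniform sampler2D uFrame;   // the video texture
      varying vec2 vUV;
      void main() {
        vec3 rgb = texture2D(uFrame, vUV).rgb;
        float luma = dot(rgb, vec3(0.299, 0.587, 0.114)); // toy "feature"
        gl_FragColor = vec4(vec3(luma), 1.0);
      }`;

    // Render into a small offscreen framebuffer...
    const fbTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, fbTex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, FEATURE_W, FEATURE_H, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, fbTex, 0);

    // ...so only the tiny feature buffer ever crosses back into JS.
    const features = new Uint8Array(FEATURE_W * FEATURE_H * 4);
    gl.readPixels(0, 0, FEATURE_W, FEATURE_H, gl.RGBA, gl.UNSIGNED_BYTE, features);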

NOTE: Platforms with general hardware video decode will not be as efficient for post-processing.

From a JS dev perspective, the ability to connect ArrayBuffers to both ends of many types of endpoints (XHR, WebSocket, DataChannel, etc.) makes the management of data exchange much simpler and more elegant.
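
A small sketch of what that looks like in practice (the URLs are placeholders, and the DataChannel is assumed to exist already): the same ArrayBuffer moves between endpoint types without any per-API conversion step.

    // Sketch only: URLs are placeholders.
    const xhr = new XMLHttpRequest();
    xhr.open('GET', '/media/chunk.bin');
    xhr.responseType = 'arraybuffer';
    xhr.onload = () => {
      const buf = xhr.response;            // ArrayBuffer straight off the wire

      const ws = new WebSocket('wss://example.org/relay');
      ws.binaryType = 'arraybuffer';
      ws.onopen = () => ws.send(buf);      // ...out again over a WebSocket

      // dataChannel.send(buf);            // ...or over a WebRTC DataChannel
    };
    xhr.send();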

The spec from the WHATWG covers more than just media, and Damon says the JS community likes it.

NOTE: the W3C spec handles back pressure using the (JS-invisible) "write pending flag" and thrown errors to indicate congestion; the WHATWG spec handles it using an explicit writability flag.
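
The WHATWG design has changed a lot since these notes; purely as a sketch of the "explicit writability" style of back pressure, here is what a producer loop looks like with the current WritableStream writer API (not necessarily the API as it stood at the time).

    // Sketch only: back pressure via an explicit readiness signal rather than
    // a hidden pending flag plus thrown errors.
    async function pump(chunks, writableStream) {
      const writer = writableStream.getWriter();
      for (const chunk of chunks) {
        await writer.ready;         // resolves only when the sink can take more
        await writer.write(chunk);  // errors surface as rejections, not throws
      }
      await writer.close();
    }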

Some comments from the group were that a great thing about WHATWG streams is how generic they are. As usual, collaboration between the two groups would be great.

Streams are obviously critical for media delivery, and being able to do efficient post-processing like shaped noise addition could be very useful for services like YouTube.

There is work currently underway (see Ian's initial proposal and the following threads) to allow workers to use/access canvas elements/contexts.
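
As a hedged sketch of roughly the shape this has since taken in browsers (OffscreenCanvas; the worker file name is illustrative), a canvas can be handed to a worker so per-frame work happens off the main thread:

    // Sketch: hand rendering control of a canvas to a worker using the
    // OffscreenCanvas API. The worker file name is illustrative.
    const canvas = document.querySelector('canvas');
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('frame-worker.js');
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    // frame-worker.js
    self.onmessage = (e) => {
      const gl = e.data.canvas.getContext('webgl');
      // ... per-frame processing entirely off the main thread ...
    };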

The current browser models and painting flows mean that event loops are currently probably the best way to handle processing new frames. But there is an opportunity for a new sort of approach, like a WebVideo API (see the earlier suggestion by Rachel Blum) that's a parallel peer to the WebAudio API. WebAudio has optimised common processing into well-defined nodes and then leaves the ScriptProcessorNode open for deeper experimentation; this sort of model would work well for a WebVideo approach and could be tied in with the opaque texture IDs discussed above.
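
None of the following exists in any spec; it is purely a hypothetical sketch, with invented names, of what a WebAudio-style node graph for video could look like (optimised built-in nodes plus a script node as the escape hatch).

    // HYPOTHETICAL: every name here is invented for illustration.
    const vctx = new WebVideoContext();

    const source = vctx.createMediaElementSource(videoElement); // frames as opaque textures
    const blur   = vctx.createConvolutionNode(kernel);          // optimised built-in node

    const custom = vctx.createScriptProcessorNode();            // escape hatch
    custom.onvideoprocess = (event) => {
      // event.inputTexture would be the opaque GPU texture id discussed above;
      // heavy work happens in a shader, only metadata returns to JS.
    };

    source.connect(blur);
    blur.connect(custom);
    custom.connect(vctx.destination);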

NOTE: There's nothing in browsers currently to decode media without rendering it. A WebVideo API/node would be "completely" independent of rendering.

A Native Client extension may be a good route to prototype a WebVideo node... however, browsers are currently very fragmented for this type of development (NaCl vs Emscripten/asm.js).