
WebM and WebP Hand Ported to JavaScript for All Browsers

70 points | devongovett | 14 years ago | badassjs.com

19 comments

[+] jonny_eh | 14 years ago
That is really awesome.

I wonder if the file size gains of switching to WebP are worth the hit of including the WebP.js file. Of course you'd then require clients to support JS, but in this day and age that's reasonable anyway.
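A back-of-the-envelope sketch of that trade-off (the 27 KB decoder size is mentioned elsewhere in the thread; the ~30% WebP savings over JPEG is an assumed ballpark, not a measured figure, and `breakEvenKB` is a hypothetical helper):

```javascript
// Rough break-even estimate: how many KB of JPEG images a page must serve
// before WebP's size savings pay back the one-time cost of shipping the
// decoder. All numbers here are illustrative assumptions.
function breakEvenKB(decoderKB, savingsRatio) {
  return decoderKB / savingsRatio;
}

// A 27 KB decoder with an assumed ~30% savings: roughly 90 KB of JPEG
// images before the switch starts saving bytes (ignoring caching of the
// decoder across pages, which would improve the picture).
console.log(breakEvenKB(27, 0.30));
```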

[+] bri3d | 14 years ago
I'm not sure this is a practical approach for "real" video playback.

Letting the browser abstract away video playback, doing whatever's optimal on the target hardware (hardware acceleration, optimized SSE/NEON vector instructions, and so on) will always be better than decoding in software, and the functionality already exists for us via the <video> tag - decoding via JavaScript is in many ways a big step backwards.

I understand that a JavaScript VP8 decoder provides greater cross-browser compatibility, but personally I think it's worth the overhead/cost of encoding and storing both H.264 and VP8/WebM versions of videos in order to win sane mobile playback and reduced CPU usage on the desktop (via hardware H.264 acceleration).

Plus, this decoder doesn't consider audio, as far as I can tell. HTML5 provides very bad support for programmatically-generated audio with strict time constraints - this is a common problem with writing HTML5-based games as well and I can't see any way that any browser today could produce adequately synchronized, real-time audio along with video rendered to a canvas.

This is an absolutely awesome piece of code, and the hacker in me loves it, but I don't think it's practical in real life. It might have limited use in audio-less product demos, page banners, and the like, but for legitimate video playback, I think the <video> tag combined with a Flash fallback is vastly superior, even when the need to encode the desired video to both H.264 and WebM is taken into account.
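The dual-encoding approach described above can be sketched with the standard HTMLMediaElement `canPlayType` API. The `pickSource` helper below is hypothetical, written as a pure function (with `canPlayType` passed in) so the selection logic is testable outside a browser:

```javascript
// Pick the first source the browser claims it can play, preferring
// "probably" over "maybe". canPlayType returns "probably", "maybe",
// or "" per the HTML5 spec.
function pickSource(sources, canPlayType) {
  for (const level of ["probably", "maybe"]) {
    for (const s of sources) {
      if (canPlayType(s.type) === level) return s.url;
    }
  }
  return null; // no native support: fall back to Flash or a JS decoder
}

// In a browser, usage would look roughly like:
// const v = document.createElement("video");
// const url = pickSource(
//   [{ type: 'video/mp4; codecs="avc1.42E01E"', url: "clip.mp4" },
//    { type: 'video/webm; codecs="vp8"',        url: "clip.webm" }],
//   (t) => v.canPlayType(t)
// );
```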

[+] stock_toaster | 14 years ago
I would be more concerned about the memory consumption (especially on mobile).

On iOS in particular I recall[1] something about JPEGs taking up significantly more memory than their file size, due to decoding the image. PNGs don't suffer from that (an iOS optimization for PNGs, or something), but they tend to be larger overall for big (retina-size) files.

I would think webp.js would be even worse off than JPEG with regard to memory usage on mobile devices. (Note: this is just a guess/hunch; I saw no data one way or the other on the linked post, nor on the libwebp site.)

[1]: Maybe a fellow HN'er knows more about this particularity with regard to iOS image memory consumption.

edit: found this: http://cubiq.org/testing-memory-usage-on-mobile-safari which appears to rebut the JPEG/PNG memory discrepancy. Perhaps our particular app was doing something odd with the JPEG images we were testing that caused the additional memory usage. Maybe it had to do with loading the actual JPEG decoding code itself (PNG perhaps already loaded?).

edit2: this article seems to confirm it: http://mobile.tutsplus.com/tutorials/mobile-design-tutorials...

subtext: test test test! :P
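The memory concern above is mostly about decoded size, which is independent of the on-disk format: a decoded bitmap costs width × height × 4 bytes of RGBA no matter how small the compressed file was. A quick sketch (the retina dimensions are just an example, and `decodedMB` is a hypothetical helper):

```javascript
// Decoded bitmaps are held uncompressed in memory: 4 bytes (RGBA) per
// pixel, regardless of whether the file on disk was JPEG, PNG, or WebP.
function decodedMB(width, height) {
  return (width * height * 4) / (1024 * 1024);
}

// A 2048x1536 "retina-size" image decodes to 12 MB in memory,
// even if the WebP file itself is only a few hundred KB.
console.log(decodedMB(2048, 1536)); // 12
```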

[+] devongovett | 14 years ago
I believe the point of WebP is that it is smaller than JPEG and PNG files. And the JS file is only 27 KB minified and gzipped...
[+] runn1ng | 14 years ago
Do I understand it correctly that the whole codec runs in javascript?
[+] fasteddie31003 | 14 years ago
I have mixed feelings about handling web media like this. It's probably very well-done, interesting JavaScript, but I don't know if this is a long-term solution for the web. I feel like high-performance decoding should be done in the browser itself and is not JavaScript's domain.
[+] justincormack | 14 years ago
But many people want to use free video formats on the web and Apple and Microsoft are still refusing to ship them.
[+] JoshTriplett | 14 years ago
This seems like the right approach going forward, as long as JavaScript engines continue to approach the performance of native code. This approach keeps codec implementations in the JavaScript sandbox where they can't provide new exploits, rather than extending the attack surface with large, frequently exploited codecs.

In general, JavaScript seems like the right approach for anything that doesn't require special privileges to do, and for that the APIs should focus on creating the minimum possible interface that lets JavaScript do all the interesting work.

[+] modeless | 14 years ago
I'd like to agree, but this makes a good benchmark, and right now it's showing that JavaScript is about 20 times slower than native code on my machine in both Chrome and Firefox. That's completely unacceptable for video decoding. In fact, even if JavaScript was just as fast as native code it would still be unacceptable for mobile video, where hardware acceleration is required for acceptable performance and battery life.
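For anyone wanting to reproduce that kind of number, a minimal timing harness looks something like this (`decodeFrame` is a hypothetical stand-in for the JS decoder's per-frame call; `Date.now()` is coarse but portable):

```javascript
// Micro-benchmark sketch: average milliseconds per decoded frame.
// Anything much above ~33 ms/frame means the decoder cannot sustain 30 fps.
function msPerFrame(decodeFrame, frames) {
  const t0 = Date.now();
  for (let i = 0; i < frames; i++) decodeFrame(i);
  return (Date.now() - t0) / frames;
}

// Usage (with a real decoder you'd feed it actual VP8 frame data):
// const avg = msPerFrame((i) => decoder.decodeFrame(frameData[i]), 300);
```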
[+] est | 14 years ago
[+] ZeroGravitas | 14 years ago
WebP is, by default, lossy whereas ImageZero is lossless so they're not directly comparable. WebP does have an in-progress lossless mode though, known as WebPll.

WebPll compresses better, but takes much longer to do it, and is therefore better suited to website images, which are encoded once and transmitted many times.

ImageZero compresses fast, but not as well, and is therefore better suited to saving images you are working on in an image editor to disk.