top | item 15713699

eugeneionesco|8 years ago

>The webassembly exploit part of the chain bums me out (I was always afraid of stuff like this when I was working on the design for it) but it's pretty uninteresting, really. The simple sort of bug you get when you insist on writing stuff in C++.

I really hope people don't think webassembly is the fault for this, this vulnerability is no different from any other memory corruption vulnerability you would find in the js interpreter or the css parser or whatever.

standupstandup|8 years ago

Well, WebAssembly's primary near-term contribution will be introducing the world of C++ exploits to web apps, which are already groaning under the load of XSS, XSRF, path traversal, SSRF, and similar attacks. Adding double-frees, use-after-frees, and buffer overflows on top doesn't seem ideal.

As for the rest, well, it'd be nice if there were any sort of plan to make Blink safer. I know about Oilpan, but what Mozilla is doing with Rust is impressive, and the JVM guys are working on rewriting the JVM in Java. What's Blink's plan to make its own code safer? Sandboxing alone?

steveklabnik|8 years ago

What exploits are you specifically worried about with wasm?

The ones you call out in this post don’t have the same impact as native, even when it’s C or C++ compiled to wasm.

kodablah|8 years ago

> I really hope people don't think webassembly is the fault for this

Nah, I think it's pretty clear GP meant "when you insist on writing interpreters/compilers in C++" not that C++ was compiled into wasm.

kevingadd|8 years ago

Yeah, sorry for being unclear - that is what I meant. I don't see wasm as at fault here, it's just a bummer that this new attack surface was introduced by writing the wasm implementation in C++ instead of memory-safe languages. It's not something so complex that it really needs to be C++.

Most (all?) browser wasm backends work by generating the internal IR used by the existing JS runtime, so it's not especially necessary to write the loader/generator in C++. The generated native modules are also often cached, which diminishes the importance of making the generator fast at the cost of safety.

I wrote all the original encoder and decoder prototypes in JS for this reason - you can make it fast enough, and the browser already has a high-performance environment in which you can run that decoder. When the result is already being cached I think writing this in C++ is a dangerous premature optimization.
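To illustrate the point about JS being good enough for this job, here's a minimal sketch of a wasm module header check written in a memory-safe host language. The magic bytes and version come from the wasm binary format; the function name and structure are my own illustration, not code from any real engine.

```typescript
// Every wasm module starts with the magic "\0asm" followed by a
// 4-byte little-endian version field (currently 1).
const WASM_MAGIC = [0x00, 0x61, 0x73, 0x6d];
const WASM_VERSION = [0x01, 0x00, 0x00, 0x00];

function checkHeader(bytes: Uint8Array): boolean {
  // An out-of-bounds index here would just yield `undefined`, not memory
  // corruption, but an explicit length check still gives a clean error path.
  if (bytes.length < 8) return false;
  for (let i = 0; i < 4; i++) {
    if (bytes[i] !== WASM_MAGIC[i]) return false;
  }
  for (let i = 0; i < 4; i++) {
    if (bytes[4 + i] !== WASM_VERSION[i]) return false;
  }
  return true;
}
```

A bug in the equivalent C++ code could read past the end of the buffer; here the worst case is a wrong boolean.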

Similarly, it's common to write decoders as a pile of switch statements and bitpacking, which creates a lot of duplication and space for bugs to hide. You can instead build these things out of a smaller set of robust primitives to limit the attack surface, but that wasn't done here either, despite my best efforts.
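The "smaller set of robust primitives" idea might look like the following sketch: a reader with a single bounds check that every higher-level decoder (here, the unsigned LEB128 varint used throughout the wasm binary format) is composed from. The class and method names are hypothetical.

```typescript
// A decoder built from a few checked primitives rather than ad-hoc
// bitpacking scattered across switch statements.
class ByteReader {
  private pos = 0;
  constructor(private readonly buf: Uint8Array) {}

  readByte(): number {
    // The bounds check lives in exactly one place; every decoder
    // built on top of readByte inherits it.
    if (this.pos >= this.buf.length) {
      throw new RangeError("unexpected end of input");
    }
    return this.buf[this.pos++];
  }

  // Unsigned LEB128: 7 payload bits per byte, high bit = continuation.
  readVarU32(): number {
    let result = 0;
    let shift = 0;
    for (;;) {
      const b = this.readByte();
      result |= (b & 0x7f) << shift;
      if ((b & 0x80) === 0) return result >>> 0;
      shift += 7;
      // A u32 needs at most 5 LEB128 bytes; reject longer encodings.
      if (shift >= 35) throw new RangeError("varuint32 too long");
    }
  }
}
```

Section decoders, type decoders, and so on can then be written entirely in terms of these primitives, so a missing length check is a single-site bug rather than one duplicated across every switch arm.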