top | item 14501154

saamyjoon|8 years ago

It's implementation dependent. Most implementations JIT Wasm up front (I think Chakra is the only engine that interprets it). So JS loads faster because almost all engines (all?) interpret JS first.
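For concreteness, this is what "up front" compilation looks like from the JS side: the synchronous WebAssembly API compiles the entire module before returning. A minimal sketch, assuming Node.js or any engine with the WebAssembly JS API; the bytes are a hand-assembled module exporting add(a, b):

```javascript
// Hand-assembled wasm module exporting add(a, b) = a + b (i32 addition).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

// The synchronous constructors compile the whole module before returning --
// the "up front" cost discussed above is paid right here, not at call time.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

(The async `WebAssembly.instantiate`/`instantiateStreaming` variants exist to keep that compilation off the main thread, but the work is still done eagerly in most engines.)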


luke_wagner|8 years ago

I think the primarily-AOT compilation strategy we see today is a consequence of many of the initial workloads being frame-based animation, where AOT avoids animation stutters. But this is likely to evolve over time as wasm workloads evolve. If and when we see wasm showing up in frameworks in contexts where page load is the primary concern, I can see a more lazy JIT strategy being the right thing, and then we'd want to specify some sort of developer knob to control it. But given a pure-JIT approach like the one Chakra is using, wasm should be able to load code byte-for-byte faster than JS.
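The lazy strategy described above amounts to compile-on-first-call. Here's a toy sketch of the idea, using `new Function` as a stand-in for an engine's JIT backend; `lazyCompile` is a made-up name, not any engine's API:

```javascript
// Toy sketch: defer "compilation" until the first call, trading a small
// cost on first invocation for a faster page load. new Function stands in
// for an engine's internal JIT; nothing here is a real engine API.
function lazyCompile(paramNames, source) {
  let compiled = null; // nothing compiled at load time
  return (...args) => {
    if (compiled === null) {
      compiled = new Function(...paramNames, source); // compile on first call
    }
    return compiled(...args);
  };
}

const add = lazyCompile(['a', 'b'], 'return a + b;');
// No compilation has happened yet; the first call pays the cost once.
console.log(add(2, 3)); // 5
```

The "developer knob" mentioned above would essentially let a page choose between this behavior and eager whole-module compilation.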

Someone|8 years ago

WebKit already has a more lazy JIT strategy. From the article being discussed:

"WebKit’s WebAssembly implementation, like our JavaScript implementation, uses a tiering system to balance startup costs with throughput. Currently, there are two tiers to the engine: the Build Bytecode Quickly (BBQ) tier and the Optimized Machine-code Generator (OMG) tier. Both rely on the B3 JIT as their low-level optimizer."

saamyjoon|8 years ago

I agree.

Chakra uses a pure JIT approach? I thought it interprets first for both JS and Wasm?

TazeTSchnitzel|8 years ago

Is “interpreting” necessarily faster?

Consider also that this is perhaps a false distinction. V8 always produces machine code when running JS, IIRC. And the fact that some code is JITed doesn't mean the entire module is.

saamyjoon|8 years ago

V8 used to do this. Now they have a JS interpreter called Ignition.

bzbarsky|8 years ago

> V8 always produces machine code when running JS IIRC.

They used to. They stopped doing that because it made initial load too slow...