top | item 33773702

wwwigham | 3 years ago

V8 stopped dropping free perf wins on the JS community every year a while ago, and people are clearly starting to notice and feel like they need to take matters into their own hands. For some, that means an adventurous rewrite of their library with nebulous end results (I hesitate to still call the proposal "eslint", since it seems to aspire to be an omni-linter). For others, it may mean abandoning JS entirely (see Rome). And for others still, it has driven home that, just as with the other interpreted languages, perf probably stopped being important after some unnoticed breakpoint.

To be real, I don't think they'd get any perf gains from a wasm component - not until most of the library is wasm, anyway. The dream of offloading a small hot function to a faster, instruction-optimized wasm implementation is usually killed by marshalling costs for anything other than numbers. Even the perf of simple string functions (like `normalizePath`) often gets destroyed in practice by things like unicode validation, since the string models and memory layouts of JS and Rust don't match.

Still, despite all that, it's tempting to _try_ because, well, what else are you going to look into for performance gains? Once you've tracked down all your V8 deopts and monomorphized all your hot code, what's left but telling V8 to get out of the way as much as possible while you go as low level as you can? Algorithmic improvements? Those often amount to random flashes of insight - you can't plan for them. So when a lot of people say "Hey, I reimplemented 5% of X in Rust and it's 20x faster!" you _could_ smugly assume it's because it's a partial implementation, or you can choose to believe that the authors don't have ulterior motives, that the tech stack itself has merit, and investigate it yourself. It'll be easier and more fun than thinking hard about usage patterns and algorithms all day, anyway.
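To make the marshalling point concrete, here's a minimal sketch (hypothetical names, a plain `Uint8Array` standing in for wasm linear memory - no real wasm module involved): before a wasm function can even look at a JS string, the string has to be copied into wasm memory as UTF-8, and the result copied back out. For something as small as a `normalizePath`, those two copies plus the UTF-8 encode/decode can cost more than the work itself.

```javascript
// Pure-JS version: operates on the string directly, no copies.
function normalizePath(p) {
  return p.replace(/\\/g, "/");
}

// What a wasm wrapper has to do. The Uint8Array below is a stand-in
// for wasm linear memory; the byte loop is the stand-in for the wasm body.
const memory = new Uint8Array(64 * 1024);
const encoder = new TextEncoder();
const decoder = new TextDecoder();

function normalizePathViaWasm(p) {
  // 1. Marshal in: encode the UTF-16 JS string to UTF-8 bytes in "wasm memory".
  const { written } = encoder.encodeInto(p, memory);
  // 2. The "wasm" body: a byte-level replace - the only part wasm would speed up.
  for (let i = 0; i < written; i++) {
    if (memory[i] === 0x5c) memory[i] = 0x2f; // '\' -> '/'
  }
  // 3. Marshal out: decode (and unicode-validate) UTF-8 back into a JS string.
  return decoder.decode(memory.subarray(0, written));
}
```

Steps 1 and 3 are both O(n) passes over the string no matter how fast step 2 gets, which is why the win only shows up once most of the work (not just one hot function) lives on the wasm side.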
