xipix | 1 month ago
One such example: audio time stretch in the browser, based on a C++ library [1]. Had this been implemented in JS, there is no way it could deliver (a) similar performance or (b) source-code portability to native apps.
[1] https://bungee.parabolaresearch.com/change-audio-speed-pitch
coldtea | 1 month ago
"Not yet"? It will never reach "bare-metal levels of performance and energy efficiency".
flohofwoe | 1 month ago
https://floooh.github.io/tiny8bit/
You can squeeze out a bit more by building with -march=native, but then there's no reason that a WASM engine couldn't do the same.
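For the curious, this is the kind of build-flag difference being described (illustrative commands only; the file name and a clang/gcc-style `cc` driver are assumptions, not from the thread):

```shell
# Portable baseline: compiler targets the generic ISA for the platform
cc -O2 emu.c -o emu_generic

# Tuned build: compiler may use every ISA extension (AVX2, etc.) found
# on the *build* machine -- a bit faster, but the binary is no longer
# portable to older CPUs. A WASM engine could apply the same tuning at
# install/JIT time, since it compiles on the user's machine.
cc -O2 -march=native emu.c -o emu_native
```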
kannanvijayan | 1 month ago
The initial order-of-magnitude jump in perf that JITs provided took us from the 2-5x overhead for managed runtimes down to some (1 + delta)x. That was driven by runtime type inference combined with a type-aware JIT compiler.
I expect that there's another significant, but smaller perf jump that we haven't really plumbed out - mostly to be gained from dynamic _value_ inference that's sensitive to _transient_ meta-stability in values flowing through the program.
Basically you can gather actual values flowing through code at runtime, look for patterns, and then inline / type-specialize those by deriving runtime types that are _tighter_ than the annotated types.
I think there's a reasonable amount of juice left in combining those techniques with partial specialization and JIT compilation, and that should get us over the hump from "slightly slower than native" to "slightly faster than native".
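The profile-then-specialize idea above can be sketched in a few lines. This is a toy illustration, not any engine's real machinery (all names are hypothetical): a generic handler records the values it actually sees, and once the pattern is meta-stable (ints only, for many consecutive calls) it rewrites its own dispatch to a specialized fast path that is tighter than anything the static types promise.

```python
# Toy sketch of dynamic value specialization (hypothetical names).
def make_profiling_add(threshold=100):
    state = {"int_streak": 0, "impl": None}

    def add_int_fast(a, b):
        # Specialized path: assumes ints. A real engine would guard
        # this assumption and deoptimize back to the generic path.
        return a + b

    def add_generic(a, b):
        # Profiling path: watch the actual operand values/types.
        if type(a) is int and type(b) is int:
            state["int_streak"] += 1
            if state["int_streak"] >= threshold:
                state["impl"] = add_int_fast  # pattern is stable: specialize
        else:
            state["int_streak"] = 0  # pattern broken: keep profiling
        return a + b

    state["impl"] = add_generic

    def add(a, b):
        return state["impl"](a, b)  # indirect dispatch, rewritten at runtime

    return add, state

add, state = make_profiling_add()
for i in range(1000):      # hot loop: operands are always ints
    add(i, 1)
assert state["impl"].__name__ == "add_int_fast"  # dispatch got specialized
```

A real JIT does this at the machine-code level (inline caches, guards, deopt), but the shape of the bet is the same: observed values are far more regular than declared types.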
I get it's an outlier viewpoint though. Whenever I hear "managed jitcode will never be as fast as native", I interpret that as a friendly bet :)
creata | 1 month ago
pjmlp | 1 month ago