We use WASM quite a bit for embedding a large amount of Rust code, full of company-specific domain logic, into our web frontend. Pretty cool, because now your backend and frontend can share all kinds of logic without endless network calls.
But it’s safe to say that the interaction layer between the two is extremely painful. We have nicely modeled type-safe code in both the Rust and TypeScript worlds and an extremely janky layer in between. You need a lot of inherently slow and unsafe glue code to make anything work. Part of it is WASM itself, part of it is wasm-bindgen. What were they thinking?
I’ve read that WASM isn’t designed with this purpose in mind to go back and forth over the boundary often. That it fits the purpose more of having longer-running compute in the background and bringing over some chunk of data at the end. Why create a generic bytecode execution platform and limit the use case so much? Not everyone is building an in-browser crypto miner. The whole WASM story is confusing to me.
My reading of it is that the people furthering WASM aren't really associated with just browsers anymore and they are building a whole new VM ecosystem that the browser people aren't interested in. This is just my take since I am not internal to those organizations. But you have the whole web assembly component model and browsers just do not seem interested in picking that up at all.
So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code, and on the other side you have people wanting it to be easier to get this access. The browser is the main driving force for WASM, as I see it, because outside of the browser the need for sandboxing is limited to plugins (where Lua often gets used), since otherwise you can run a binary or a Docker container. So WASM doesn't really have much impetus to improve beyond compute.
WASM as it is, is good enough for non-trivial graphics and geometry workloads: visibility culling (given an octree/frustum), data de-serialization (point clouds, meshes), and actual BREP modeling. All of these a) are non-trivial to implement, b) would be a pain to rewrite and maintain, and c) run pretty swell in WASM.
I agree WASM has its drawbacks, but the execution model is mostly fine for these types of tasks, where you offload the task to a worker and are fine waiting a millisecond or two for the response.
The main benefit for complex tasks like the above is that when a product needs to support an isomorphic web and native experience (quite many use cases actually in CAD, graphics & GIS) based on complex computation you maintain, the implementation and maintenance load drops by half. I.e. these _could_ be e.g. TypeScript, but then maintaining feature parity becomes _much_ more burdensome.
> I’ve read that WASM isn’t designed with this purpose in mind to go back and forth over the boundary often.
It's fine and fast enough as long as you don't need to pass complex data types back and forth. For instance WebGL and WebGPU WASM applications may call into JS thousands of times per frame. The actual WASM-to-JS call overhead itself is negligible (in any case, much less than the time spent inside the native WebGL or WebGPU implementation), but you really need to restrict yourself to directly passing integers and floats for 'high frequency calls'.
Those problems are quite similar to any FFI scenario though (e.g. calling from any high level language into restricted C APIs).
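To make the "integers and floats are cheap" point concrete, here's a minimal sketch: a tiny WASM module assembled by hand as raw bytes (following the WASM binary format), exporting a single `add` function. Calling it from JS with scalars needs no glue code at all.

```javascript
// Minimal hand-assembled WASM module, equivalent to:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0  local.get 1  i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: 1 func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// Scalars cross the boundary directly: no copying, no conversion.
console.log(instance.exports.add(2, 40)); // 42
```

The pain only starts once the arguments are strings, objects, or buffers, which is exactly the FFI parallel drawn above.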
The entire DOM API is tightly coupled to JS; it's all designed with JS in mind, and any new and future proposed changes are thought about solely through the lens of JS.
If they introduced a WASM API it would perpetually be a few months/years behind the JS one, and any new features would have to be implemented in both, etc.
I can see why it's not happened
(edit) And yes, I think the intention of WASM was either heavy processing, or UI elements more along the lines of what used to be done with Java applets etc. potentially using canvas and bypassing the DOM entirely, not as an alternative to JS for doing `document.createElement`
The confusion is perhaps due to your usage focus versus the security constraints browser compiler makers face.
First off, remember that initially all we had was JS; then Asm.JS was forced down Apple's throat by being "just" a JS-compatible performance hack (remember that Google had tried to introduce NaCl beforehand but it never got traction). You can still see the Asm.JS lineage in how Wasm branching opcodes work (you can always easily decompose them into while loops together with break and continue instructions).
The target market for NaCl, Asm.JS and Wasm seems to have been focused on enabling porting of C/C++ games, even if other usages were always of interest, so while interop times can be painful it's usually not a major factor.
Secondly, as a compiler maker (and from looking at performance profiles), I usually place languages into 3 categories.
Category 1: Plain-memory-accessors. Objects are usually a pointer number + offsets for members, with more or less manually managed memory. Cache friendliness is your own worry; CPU instructions are always simple.
C, C++, Rust, Zig, Wasm/Asm.JS, etc. go here.
Category 2: GC'd offset-languages. While we still have pointers (now called references), they're usually restricted from being directly mutated, instead going through specialized access instructions. However, as with category 1, the actual value can often be accessed with pointer + offset, and object layouts are _fixed_, so less freedom vs JS but higher perf.
Also, there can often be GC-specific instructions like read/write barriers associated with object accesses. Performance for the actual instructions is still usually good, but GCs can affect access patterns to increase costs, and there is some GC collection unpredictability.
Java, C#, Lisps, high-perf functional languages, etc. usually belong here (with exceptions).
Category 3: GC'd free-prop languages. Objects are no longer of fixed size (you can add properties after creation); runtimes like V8 try their best to optimize this away to approach category 2 languages, but abuse things enough and you'll run off a performance cliff. Every runtime optimization requires _very careful_ design of fallbacks that can affect practically any other part of the runtime (these manifest as type-confusion vulnerabilities if you look at bug reports), as well as careful handling of native bindings.
JS, Python, Lua, Ruby, etc. go here.
Naturally some languages/runtimes can straddle these lines (.NET/CIL has always been able to run C, and later JS, Ruby and Python in addition to C#, and today C# itself is gaining many category 1 features); I'm mostly putting the languages into the categories where the majority of user-created code runs.
To get back to the "troubles" of Wasm<->JS: as you noticed, they are of category 1 and 3. Since Wasm is "wrapped" by JS, you can usually reach into Wasm memory from JS since it's "just a buffer"; the end-user security implications are fairly low since JS has well-defined bounds checking (outside of performance costs).
The other direction is a pure clusterf from a compiler writer's point of view. Remember that most of those Cat 3 optimizations have security implications? Allowing access would require every precondition check to be replicated on the Wasm side as well as in the main JS runtime (or you build a unified runtime, but the optimization strategies are often different).
The new Wasm-GC (finally usable with Safari since late last year) allows GC'd category 2 languages to be built directly to Wasm (and not ship their own GC via Cat 1 emulation like C#/Blazor) or be compiled to JS. Even here they punted on any access to category 3 (JS) objects, basically marking them as opaque objects that can be referred to and passed back to JS (an improvement over previous WASM, since there is no extra GC syncing as one GC handles it all, but still no direct access standardized, iirc).
So, security has so far taken a center stage over usability. They fix things as people complain but it's not a fast process.
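The "just a buffer" direction can be sketched in a few lines. Here a bare `WebAssembly.Memory` stands in for a module's exported linear memory (a real module would export its own); JS reads and writes it through bounds-checked typed-array views, which is why this direction is safe and cheap.

```javascript
// A bare WebAssembly.Memory stands in for a module's exported linear memory.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const view = new Uint8Array(memory.buffer);

// JS can write straight into "WASM memory"...
const msg = new TextEncoder().encode('hello from linear memory');
view.set(msg, 0);

// ...and read it back; every access goes through the typed array's bounds checks.
const text = new TextDecoder().decode(view.subarray(0, msg.length));
console.log(text); // "hello from linear memory"
```

Going the other way, into JS objects, has no such simple picture, which is the asymmetry described above.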
WASM is just hotfixing JavaScript so people can use any language they want.
It's all about JavaScript being popular and being the standard language. JS is not a great language, but it's standard across every computer, and that dwarfs anything else that can be said about it.
Adjusting browsers so they can run WASM was easy to do, but telling browser vendors to make the DOM work was obviously more difficult, because they might handle the DOM in various ways.
It's not just the DOM, it's also all other APIs like WebGL2.
I ended up having to rewrite the entire interfacing layer of my mobile application (which used to be WebAssembly running in WebKit/Safari on iOS) because I was getting horrible performance losses each time I crossed that barrier. For graphics applications where you have to allocate and pass buffers, or pipe commands in general, you take a horrible hit. Firefox and Chrome on Windows/macOS/Linux did quite well, but Safari...
Everything has to pass the JavaScript barrier before it hits the browser. It's so annoying!
The web is a platform that has so much unrealized potential that is absolutely wasted.
Wasm is the perfect example of this: it has the potential to revolutionize web (and desktop GUI) development, but it hasn't progressed beyond niche single-threaded use cases in basically 10 years.
It should never have been web assembly. WASM is the fulfillment of the dream that started with Java VM in the 90’s but never got realized. A performant, truly universal virtual machine for write-once, run anywhere deployment. The web part is a distraction IMHO.
Betteridge's law of headlines holds here: no / never seems to be the answer.
The article also discussed ref types, which do exist and do provide... something. Some ability to at least refer to host objects. It's not clear what that enables or what its limitations are.
Definitely some feeling of being rug-pulled in the shift here. It felt like there was a plan for good integration, but fast forward half a decade plus: there's been so much progress, yet it's still so unclear how WebAssembly is going to alloy with the web. It seems like we have reams of generated glue code doing so much work to bridge systems.
Very happy that Dan at least checked in here with a state-of-the-wasm-for-web-people type post. It's been years of waiting and wondering, and I've been keeping my own tabs somewhat through the twists and turns, but having some historical artifact, some point-in-time recap to go look at like this: it's really crucial for the health of a community to have some check-ins with the world, to let people know what to expect. Particularly for the web, wasm has really needed an updated State of Web WebAssembly.
I wish I felt a little better though! Jco is amazing, but running a JS engine in wasm to be able to use wasm-components is gnarly as hell. Maybe by 2030 wasm & wasm-components will be doing well enough that browsers will finally rejoin the party & start implementing again.
> Definitely some feeling of being rug-pulled in the shift here.
Definitely feeling rug-pulled.
What I think all the people who harp on "Don't worry, going through JS is good enough for you" are missing is the subtext of their message. They might objectively be right, but in the end what they are saying is that they are content with WASM being a second-class citizen in the web world.
This might be fine for everyone needing a quick and dirty solution now, but it is not the kind of narrative that draws in smart people to support an ecosystem in the long run. When you bet, you bet on the rider and not the domestique.
Reference types make wasm/js interoperability way cleaner and easier. wasm-gc added a way to test a function pointer for whether it will trap or not.
And JSPI has been a standard since April and is available in Chrome >= 137. I think JSPI is the greatest step forward for WebAssembly in the browser ever. Just need Firefox and Safari to implement it...
Is there any data on the performance cost of JS/WASM context switches? The way the architecture is described, it sounds as if the costs could be substantial, but the approaches described in the article basically hand them out like candy.
This would sort of defeat the point that WASM is supposed to be for the "performance critical" parts of the application only. It doesn't seem very useful if your business logic runs fast, but requires so many switching steps that all performance benefits are undone again.
Yeah, it's very unfortunate for WebGL/WebGPU apps, where every call has to pass/convert typed arrays and issue a js gl call. It pretty much kills any advantage of using WASM. Hope that changes.
Not entirely sure, but C#'s Blazor is amazing. I can stick to purely C# code, front-end and back-end; we rarely call out to JS unless it's for things like file upload dialogs. I don't want to ever touch JavaScript again after this workflow.
Edit:
And if you don't want to do "WebAssembly" you can have it do it all server rendered, think of a SPA on steroids.
This problem is how you spot people that have tried to do it vs those that just talk about it. Everyone ends up with batching calls back and forth because the cost is so high.
Separately the conceptual mismatch when the js has to allocate/deallocate things on the wasm side is also tedious to deal with.
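That allocate/copy/call/free dance looks roughly like this. The WASM-side exports are stubbed out in JS here (the `alloc`/`free`/`sum_bytes` names and the bump allocator are invented for illustration; a real module, or wasm-bindgen's generated glue, provides the equivalents):

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });

let bump = 0; // trivial bump allocator standing in for the module's real one
const exports = {
  memory,
  alloc: (len) => { const p = bump; bump += len; return p; },
  free: (_ptr, _len) => {},          // a real allocator would reclaim here
  sum_bytes: (ptr, len) =>           // stand-in for the "real work" done in WASM
    new Uint8Array(memory.buffer, ptr, len).reduce((a, b) => a + b, 0),
};

// Every boundary crossing with non-scalar data repeats this sequence, which
// is why everyone ends up batching many logical calls into one crossing.
function callIntoWasm(bytes) {
  const ptr = exports.alloc(bytes.length);               // 1. allocate on the WASM side
  new Uint8Array(exports.memory.buffer).set(bytes, ptr); // 2. copy the data in
  const result = exports.sum_bytes(ptr, bytes.length);   // 3. the actual call
  exports.free(ptr, bytes.length);                       // 4. release the allocation
  return result;
}

console.log(callIntoWasm(new Uint8Array([1, 2, 3, 4]))); // 10
```

The conceptual mismatch is visible in step 4: JS code ends up responsible for freeing memory it never conceptually owned.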
I want DOM access from WASM, but I don't want WASM to have to rely on UTF-16 to do it (DOMString is a 16-bit encoding). We already have the js-string-builtins proposal which ties WASM a little closer to 16-bit string encodings and I'd rather not see any more moves in that direction. So I'd prefer to see an additional DOM interface of DOMString8 (8-bit encoding) before providing WASM access to DOM apis. But I suspect the interest in that development is low.
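For context on what that encoding mismatch costs: JS strings are (conceptually) UTF-16, while Rust-side strings are UTF-8, so every string crossing the boundary gets re-encoded. A minimal sketch of what the glue does, using the standard `TextEncoder`/`TextDecoder` APIs:

```javascript
const encoder = new TextEncoder(); // JS string -> UTF-8 bytes
const decoder = new TextDecoder(); // UTF-8 bytes -> JS string

// In real glue these bytes would be copied into WASM linear memory.
const utf8 = encoder.encode('héllo DOM');
console.log(utf8.length); // 10 ("é" takes two bytes in UTF-8)

// And the reverse conversion happens on the way back out.
const roundTripped = decoder.decode(utf8);
console.log(roundTripped === 'héllo DOM'); // true
```

An 8-bit DOMString variant would let the WASM side skip one of these conversions for ASCII-heavy content.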
Tbh I would be surprised if converting between UTF-8 and JS strings is the performance bottleneck when calling into JS code snippets which manipulate the DOM.
In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).
Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).
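A minimal sketch of that "move one level up" idea, with an invented op format and a stub object standing in for a real DOM element: the WASM side emits a flat batch of operations, and one "juicy" JS function applies them all in a single crossing.

```javascript
// One coarse-grained entry point instead of granular per-property setters.
function applyOps(element, ops) {
  for (const op of ops) {
    switch (op.kind) {
      case 'text':  element.textContent = op.value; break;
      case 'attr':  element.attributes[op.name] = op.value; break;
      case 'class': element.classList.push(op.value); break;
    }
  }
}

// Stand-in for a DOM element (no real DOM in this sketch).
const el = { textContent: '', attributes: {}, classList: [] };

// One boundary crossing delivers the whole batch:
applyOps(el, [
  { kind: 'text', value: 'hello' },
  { kind: 'attr', name: 'role', value: 'button' },
  { kind: 'class', value: 'primary' },
]);

console.log(el.textContent); // "hello"
```

Three granular calls collapse into one, which is exactly the trade the JS frameworks make internally too.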
I'm worried that wide use of WASM is going to reduce the amount of abilities extensions have. Currently a lot of websites are basically source-available by default due to JS.
With minimisers and obfuscators I don't see wasm adding to the problem.
I felt something was really lost once css classes became randomised garbage on major sites. I used to be able to fix/tune a website layout to my needs but now it's pretty much a one-time effort before the ids all change.
> Currently a lot of websites are basically source-available by default due to JS.
By default maybe, but JS obfuscators exist so not really. Many websites have totally incomprehensible JS even without obfuscators due to extensive use of bundlers and compile-to-JS frameworks.
I expect if WASM gets really popular for the frontend we'll start seeing better tooling - decompilers etc.
I am confused by this. If WASM is a VM, then why would it understand the DOM? To me it is akin to asking "When will Arm get DOM support?" Seems like the answer is "When someone writes the code that runs on WASM that interacts with the DOM." Am I missing something? (not a web dev.)
Support is far from perfect, but we're moving towards a much more extensible and generic way to support interacting with the DOM from WebAssembly -- and we're doing it via the Component Model and WebAssembly Interface Types (WIT) (the "modern" in "modern" WebAssembly).
What's stopping us the most from being very effective in browsers is the still-experimental browser shim for components in Jco specifically. This honestly shouldn't be blocking us at this point but... It's just that no one has gotten around to improving and refactoring the bindings.
That said, the support for DOM stuff is ready now (you could use those WIT interfaces and build DOM manipulating programs in Rust or TinyGo or C/C++, for example).
P.S. If you're confused about what a "component" is or what "modern" WebAssembly means, start here:
I have zero experience with anything wasm, just regular old DOM with TypeScript, but I wonder if this is the kind of problem that could be addressed the same way that Phoenix LiveView addresses frontend updates: by message passing only the diff changes and delegating the DOM manipulation to what already works, effectively modelling the wasm runtime as an actor.
I don't think I want WebAssembly to have DOM support.
Would it be nice? Yes. But.
Every added feature is a trade-off between need -vs- outlay, overhead, complexity & other drawbacks. In order to justify the latter things, that "need" must be significant enough. I'd like to have DOM, but I don't feel the need is significant.
Some thoughts on use-cases:
1. "Inactive" or "in-instance" DOM APIs for string parsing, document creation, in-memory node manipulation, serialisation: this is all possible today in WASM with libraries. Having it native might be cool but it's not going to be a significantly different experience. The benefits are marginal here.
2. "Live / active" or "in-main-thread" direct access APIs to manipulate rendered web documents from a WASM instance - this is where the implementation details get extremely complex & the security surface area starts to really widen. While the use-cases here might be a bit more magical than in (1), the trade-offs are much more severe. Even outside of security, the prospect of WASM code "accidentally" triggering paints, or slow / blocking main-thread code hooked on DOM mutation events, is a potential nightmare. Trade-offs definitely not worth it here.
Besides, if you really want to achieve (2), writing an abstraction to link main-thread DOM APIs to WASM postMessage calls isn't a big lift & serves every reasonable use-case I can think of.
You can effectively try this today in Dart. When running in the browser, Dart can compile to either JavaScript or Wasm and both backends support DOM access via https://pub.dev/packages/web.
The DOM access in Wasm does trampoline through JavaScript under the hood, which introduces some overhead. Dart uses WasmGC, though, which is supported on Chrome/FF/Safari and lowers that overhead by enabling objects to be shared across the Wasm / JS boundary. In the benchmarks we've tried, the overhead is not that noticeable. But direct access (from Wasm) would be faster.
This app is too small to show benefits, but the code size is about the same across the two and similar to those at todomvc.com. We are seeing potential benefits on page load time (Wasm is faster to parse/initialize) and compute (Wasm is faster on many workloads).
Why does WASM need to manipulate the DOM when JS already excels at that? Interfacing with JS was never really an issue; yes you do have to design reasonable module boundaries and understand how data is going to be shared. That just leads to simpler / stronger program design.
If you're writing a DOM UI heavy app, use JavaScript. Many WASM apps, like games, have no interest in the DOM. It's just more spec bloat.
The fact that this needs to be explained with mentions of pointers and data alignment and garbage collection tells me that the decisions that the web standards committees make continue to be just completely disconnected from anything sane.
maybe my read is wrong, but everything i look at today just screams to me that the web is extremely poorly designed; everything about it is simply wrong.
Actually my journey was quite similar. I started to build a bindings and web components framework in Pure Go so that I can build user interfaces with webview/webview.
My apps just go:embed all their assets and spawn a local webview as their UI, which is quite nice because client and server use the same schemas and same validations for e.g. web forms and the fetch/REST APIs.
Server-side-rendered components are implemented using a web components graph whose components can be String()ified into HTML.
It's a bit experimental though, and the API in the components graph might change in the future:
Things like Qt and browsers became popular because people realized they could short-circuit OS vendors asking developers to be loyal to them. The glue won.
But Qt and browsers and JS are just hotfixes; they're not sound technologies, they're just glue.
Has anybody written a nice DOM wrapper for C++/Rust? So you can do everything from the comfort of the C++/Rust application? The API should match the JavaScript APIs as much as possible.
The whole WASM story is confusing to me.
https://github.com/ealmloff/sledgehammer_bindgen
How would you make such a thing without limiting it in some such way?
That describes much of modern computing.
Think of it as a backend and not as library and it clicks.
Not to mention JS engines are very complicated.
Trying to shoehorn Rust as a web scripting language was your second mistake
Your first mistake was to mix Rust, TypeScript and JavaScript only just to add logic to your HTML buttons
I swear, things get worse every day on this planet
I’ve personally felt like it has been progressing, but I’m hoping you can expand my understanding!
[+] [-] 3cats-in-a-coat|7 months ago|reply
[+] [-] jauntywundrkind|7 months ago|reply
Article l also discussed ref types, which do exist and do provide... Something. Some ability to at least refer to host objects. It's not clear what that enables or what it's limitstions are.
Definitely some feeling of being rug-pulled in the shift here. It felt like there was a plan for good integration, but fast forward half a decade+ and there's been so so much progress and integration but it's still so unclear how WebAssembly is going to alloy the web, seems like we have reams of generated glue code doing so much work to bridge systems.
Very happy that Dan at least checked in here, with a state of the wasm for web people type post. It's been years of waiting and wondering, and I've been keeping my own tabs somewhat through twists and turns but having some historical artifact, some point in time recap to go look at like this: it's really crucial for the health of a community to have some check-ins with the world, to let people know what to expect. Particularly for the web, wasm has really needed an update State of the Web WebAssmebly.
I wish I felt a little better though! Jco is amazing but running a js engine in wasm to be able to use wasm-components is gnarly as hell. Maybe by 2030 wasm & wasm-components will be doing well enough that browsers will finally rejoin the party & start implementing new.
[+] [-] weinzierl|7 months ago|reply
Definitely feeling rug-pulled.
What I think all the people that hark on the "Don't worry, going through JS is good enough for you." are missing is the subtext of their message. They might objectively be right, but in the end what they are saying is that they are content with WASM being a second class citizen in the web world.
This might be fine for everyone needing a quick and dirty solution now, but it is not the kind of narrative that draws in smart people to support an ecosystem in the long run. When you bet, you bet on the rider and not the domestique.
[+] [-] hoodchatham|7 months ago|reply
And JSPI is a standard since April and available in Chrome >= 137. I think JSPI is the greatest step forward for webassembly in the browser ever. Just need Firefox and Safari to implement it...
[+] [-] xg15|7 months ago|reply
This would sort of defeat the point: WASM is supposed to be for the "performance critical" parts of the application only. It doesn't seem very useful if your business logic itself runs fast, but requires so many boundary crossings that all the performance benefits are undone again.
[+] [-] markdog12|7 months ago|reply
[+] [-] afiori|7 months ago|reply
It's still the same JIT calling into itself; there's no reason it should be far slower than JS-to-JS calls.
[+] [-] giancarlostoro|7 months ago|reply
Edit:
And if you don't want to do "WebAssembly" you can have it all server-rendered; think of a SPA on steroids.
[+] [-] breve|7 months ago|reply
https://www.youtube.com/watch?v=4KtotxNAwME
https://www.youtube.com/watch?v=V1cqQRmVAK0
[+] [-] fidotron|7 months ago|reply
Separately, the conceptual mismatch when the JS side has to allocate/deallocate things on the wasm side is also tedious to deal with.
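To make that pain concrete, here is a hedged sketch of the glue pattern in TypeScript. The `WasmExports` interface and the `process` function are invented for illustration (real wasm-bindgen output looks different); the mock at the bottom just makes the shape runnable without a compiled module:

```typescript
// JS must allocate in the module's linear memory, copy bytes in, call,
// and remember to free -- manual memory management leaking into JS.
interface WasmExports {
  memory: { buffer: ArrayBuffer };
  alloc(len: number): number;               // returns a pointer into linear memory
  dealloc(ptr: number, len: number): void;
  process(ptr: number, len: number): number; // e.g. counts non-zero bytes
}

function callWithString(exports: WasmExports, s: string): number {
  const bytes = new TextEncoder().encode(s);
  const ptr = exports.alloc(bytes.length);                 // allocate on the wasm side
  new Uint8Array(exports.memory.buffer).set(bytes, ptr);   // copy across the boundary
  try {
    return exports.process(ptr, bytes.length);
  } finally {
    exports.dealloc(ptr, bytes.length);                    // forgetting this leaks wasm memory
  }
}

// A tiny in-JS mock so the pattern is runnable without a real module:
function mockExports(): WasmExports {
  const memory = { buffer: new ArrayBuffer(1024) };
  let next = 0;
  return {
    memory,
    alloc: (len) => { const p = next; next += len; return p; },
    dealloc: () => { /* bump allocator: freeing is a no-op */ },
    process: (ptr, len) =>
      new Uint8Array(memory.buffer, ptr, len).filter((b) => b > 0).length,
  };
}
```

Every string, slice, or object that crosses the boundary needs some variant of this dance, which is where much of the "janky layer in between" comes from.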
[+] [-] theSherwood|7 months ago|reply
[+] [-] flohofwoe|7 months ago|reply
In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).
Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).
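That batched approach can be sketched in a few lines. All names and the op encoding below are invented for illustration (a real system would write ops into linear memory rather than JS objects): the wasm side records UI operations, and one "juicy" JS call replays the whole batch instead of crossing the boundary per property.

```typescript
// Hypothetical op codes the wasm side would emit.
enum Op { CreateDiv = 0, SetText = 1, Append = 2 }

type Command =
  | { op: Op.CreateDiv; id: number }
  | { op: Op.SetText; id: number; text: string }
  | { op: Op.Append; parent: number; child: number };

// Replay against a "document-like" shape so the sketch runs without a browser;
// in a real framework this loop would touch the actual DOM.
interface NodeLike { children: NodeLike[]; text: string }

function applyBatch(cmds: Command[]): Map<number, NodeLike> {
  const nodes = new Map<number, NodeLike>();
  for (const c of cmds) {
    switch (c.op) {
      case Op.CreateDiv: nodes.set(c.id, { children: [], text: "" }); break;
      case Op.SetText: nodes.get(c.id)!.text = c.text; break;
      case Op.Append: nodes.get(c.parent)!.children.push(nodes.get(c.child)!); break;
    }
  }
  return nodes;
}
```

The design point is that boundary-crossing cost is paid once per batch, not once per setter, which is exactly what granular DOM bindings get wrong.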
[+] [-] CaptainFever|7 months ago|reply
[+] [-] Fluorescence|7 months ago|reply
I felt something was really lost once CSS classes became randomised garbage on major sites. I used to be able to fix/tune a website layout to my needs, but now it's pretty much a one-time effort before all the class names change again.
[+] [-] IshKebab|7 months ago|reply
By default maybe, but JS obfuscators exist so not really. Many websites have totally incomprehensible JS even without obfuscators due to extensive use of bundlers and compile-to-JS frameworks.
I expect if WASM gets really popular for the frontend we'll start seeing better tooling - decompilers etc.
[+] [-] MisterTea|7 months ago|reply
[+] [-] hardwaresofton|7 months ago|reply
Just a note, but there is burgeoning support for this in "modern" WebAssembly:
https://github.com/bytecodealliance/jco/tree/main/examples/c...
If raw WebIDL binding generation support isn't interesting enough:
https://github.com/bytecodealliance/jco/blob/main/packages/j...
https://github.com/bytecodealliance/jco/blob/main/packages/j...
https://github.com/bytecodealliance/jco/blob/main/packages/j...
Support is far from perfect, but we're moving towards a much more extensible and generic way to support interacting with the DOM from WebAssembly -- and we're doing it via the Component Model and WebAssembly Interface Types (WIT) (the "modern" in "modern" WebAssembly).
What's most holding us back from being effective in browsers is the still-experimental browser shim for components in Jco specifically. It honestly shouldn't be blocking at this point; it's just that no one has gotten around to improving and refactoring the bindings.
That said, the support for DOM stuff is ready now (you could use those WIT interfaces and build DOM manipulating programs in Rust or TinyGo or C/C++, for example).
P.S. If you're confused about what a "component" is or what "modern" WebAssembly means, start here:
https://component-model.bytecodealliance.org/design/why-comp...
If you want to dive deeper:
https://github.com/WebAssembly/component-model
[+] [-] gchamonlive|7 months ago|reply
[+] [-] Havoc|7 months ago|reply
One of the reasons I’m interested in wasm is to get away from the haphazardly evolved JS ecosystem…
[+] [-] lucideer|7 months ago|reply
Would it be nice? Yes. But.
Every added feature is a trade-off between need -vs- outlay, overhead, complexity & other drawbacks. In order to justify the latter things, that "need" must be significant enough. I'd like to have DOM, but I don't feel the need is significant.
Some thoughts on use-cases:
1. "Inactive" or "in-instance" DOM APIs for string parsing, document creation, in-memory node manipulation, serialisation: this is all possible today in WASM with libraries. Having it native might be cool but it's not going to be a significantly different experience. The benefits are marginal here.
2. "Live / active" or "in-main-thread" direct-access APIs to manipulate rendered web documents from a WASM instance - this is where the implementation details get extremely complex & the security surface area starts to really widen. While the use-cases here might be a bit more magical than in (1), the trade-offs are much more severe. Even outside of security, the prospect of WASM code "accidentally" triggering paints, or slow / blocking main-thread code hooked on DOM mutation events, is a potential nightmare. The trade-offs are definitely not worth it here.
Besides, if you really want to achieve (2), writing an abstraction to link main-thread DOM APIs to WASM postMessage calls isn't a big lift & serves every reasonable use-case I can think of.
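That abstraction is indeed a small lift. Everything in this sketch is invented for illustration: `makeChannel` stands in for the worker/main-thread postMessage pair, and a plain `state` object stands in for the document, so the shape runs anywhere:

```typescript
// Serializable DOM commands the wasm/worker side is allowed to send.
type DomCall = { method: "setTitle" | "setText"; args: string[] };

// Stand-in for postMessage: in a real app this would be
// worker.postMessage on one side and self.onmessage on the other.
function makeChannel(handler: (msg: DomCall) => void) {
  return { post: (msg: DomCall) => handler(msg) };
}

// Main-thread side: the only code with (mock) DOM access.
const state = { title: "", text: "" };
const channel = makeChannel((msg) => {
  if (msg.method === "setTitle") state.title = msg.args[0];
  if (msg.method === "setText") state.text = msg.args[0];
});

// "WASM" side: no DOM access at all, just commands over the channel.
channel.post({ method: "setTitle", args: ["Hello"] });
channel.post({ method: "setText", args: ["from a worker"] });
```

Because only serializable commands cross the channel, the main thread keeps full control over when and how the real DOM is touched, which sidesteps most of the security and blocking concerns from (2).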
[+] [-] flakiness|7 months ago|reply
https://queue.acm.org/issuedetail.cfm?issue=3747201
[+] [-] jazzypants|7 months ago|reply
I also don't think it has been posted here, so feel free to do so.
[+] [-] vsmenon|7 months ago|reply
The DOM access in Wasm does trampoline through JavaScript under the hood, which introduces some overhead. Dart uses WasmGC, though, which is supported on Chrome/FF/Safari and lowers that overhead by enabling objects to be shared across the Wasm / JS boundary. In the benchmarks we've tried, the overhead is not that noticeable. But direct access (from Wasm) would be faster.
Jaspr (https://jaspr.site) is a react-style framework that sits on top of this. You can see example usage here: https://github.com/vsmenon/todomvc/
This app is too small to show benefits, but the code size is about the same across the two and similar to those at todomvc.com. We are seeing potential benefits on page load time (Wasm is faster to parse/initialize) and compute (Wasm is faster on many workloads).
[+] [-] pton_xd|7 months ago|reply
If you're writing a DOM UI heavy app, use JavaScript. Many WASM apps, like games, have no interest in the DOM. It's just more spec bloat.
[+] [-] naikrovek|7 months ago|reply
maybe my read is wrong, but everything i look at today just screams to me that the web is extremely poorly designed; everything about it is simply wrong.
[+] [-] cookiengineer|7 months ago|reply
My apps just go:embed all their assets and spawn a local webview as their UI, which is quite nice because client and server use the same schemas and same validations for e.g. web forms and the fetch/REST APIs.
Server-side-rendered components are implemented using a web components graph whose components can be String()ified into HTML.
It's a bit experimental though, and the API in the components graph might change in the future:
https://github.com/cookiengineer/gooey
[+] [-] jokoon|7 months ago|reply
Things like Qt and browsers became popular because people realized they could short-circuit OS vendors who were asking developers to be loyal to them. The glue won.
But Qt and browsers and JS are just hotfixes; they're not sound technologies, they're just glue.
[+] [-] edg5000|7 months ago|reply
[+] [-] the_duke|7 months ago|reply
Has been used by most of the Rust web frontend frameworks for years.
It all has to go through JS shims though, limiting the performance potential.
[1] https://docs.rs/web-sys/latest/web_sys/
[+] [-] msie|7 months ago|reply