top | item 33228891

Wasm-service: Htmx, WebAssembly, Rust, ServiceWorker proof of concept

216 points | richardanaya | 3 years ago | github.com | reply

81 comments

[+] chrismorgan|3 years ago|reply
The demo also demonstrates why something like this is insufficient: you can’t rely on the service worker loading. Service workers must be optional. There’s a reason invocations always start by checking if navigator.serviceWorker even exists, and why navigator.serviceWorker.register() returns a Promise, and why the ServiceWorkerRegistration type is comparatively complicated: service workers aren’t permitted in all contexts, and even when they are, they’re installed asynchronously in the background after the page finishes loading.
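The defensive pattern described above can be sketched as follows; the `/sw.js` path and the `swSupported` helper are illustrative, not from the demo:

```javascript
// Hypothetical helper: feature-detect before relying on a worker.
function swSupported(nav) {
  return !!nav && "serviceWorker" in nav;
}

if (typeof navigator !== "undefined" && swSupported(navigator)) {
  // register() returns a Promise; the worker installs asynchronously
  // in the background and may not control the page that registered it.
  navigator.serviceWorker
    .register("/sw.js")
    .then((reg) => console.log("registered with scope", reg.scope))
    .catch((err) => console.error("registration failed:", err));
} else {
  // No worker available (e.g. Firefox Private Browsing): every
  // request must be served by a real backend instead.
}
```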

To show this easily, try this in a Firefox Private Browsing window: service workers are disabled there, so the request goes through to the server, which responds 405 Method Not Allowed.

So in order to make this approach reliable, you’ll need to run that code on your server, not just in the service worker. This is totally viable, but an approach that has seen surprisingly little experimentation. (I myself have a grand scheme that’s actually fairly fleshed out, but I’ve only touched it once in the last year and a half, so don’t hold your breath. My scheme is predicated upon aggressively not requiring client-side scripting, but using it only for completely optional enhancement; and upon running the same source code in a worker, in a service worker, at the edge or on the origin server, with only a teeny bit of JavaScript on the main thread.)

[+] thagsimmons|3 years ago|reply
This is too negative - it's fine to require service workers if it's justified. Service workers work in private browsing mode in Safari, Chrome, and Brave; they're only unimplemented in Firefox's private browsing because it "hasn't been a priority yet":

https://bugzilla.mozilla.org/show_bug.cgi?id=1320796

Firefox will eventually catch up. The possibilities for this approach are actually quite interesting, and go far beyond the less useful scheme you outlined, which relegates it to being merely an optimization.

[+] buro9|3 years ago|reply
I RTFA first and couldn't get it working; came here, saw your comment, and now know why it doesn't work.

I may be a minority weirdo, but I have set dom.serviceWorkers.enabled to false.

Why? Because legit sites aren't broken by this, and I always clear all browser caches and databases on close (several times per day). IMHO service workers are mostly used for long-lived tracking, to supersede the cookie, and I also dislike background processing for things I just don't care about.

So for me... service workers are disabled. They just don't work. And it turns out the web works just fine.

[+] nymanjon|3 years ago|reply
I've created an offline-first web app based on service workers. I've created another one that could be pushed to the back end (like on Node.js) and would be just a straight MPA. I guess both of them could be pushed to the back end if needed. Since I just use them myself, I don't worry about them not working without JS enabled. I also created HTMF, similar to HTMX but made to be a progressive enhancement from the get-go.

https://github.com/jon49/Soccer

https://github.com/jon49/WeightTracker

https://github.com/jon49/MealPlanner

https://github.com/jon49/htmf

[+] nabakin|3 years ago|reply
I'm more optimistic about using Service Workers in this way, but I agree this was definitely not their intended use case so there will be problems.

There's another flaw I see with using Service Workers in this way. After a period of inactivity (a minute or so on Firefox), the browser will shut down the Service Worker. When it shuts down, all variables in the Service Worker scope are freed, and I assume the WebAssembly server instance is as well. How would you maintain server state in this scenario?
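One way around this (a sketch, not something the article does) is to treat in-worker state as disposable and keep the authoritative copy in durable storage such as IndexedDB, restoring it lazily on each wake-up; `serializeState`/`restoreState` are made-up names:

```javascript
// Hypothetical shape: keep server state as plain data so it can be
// written out before the worker is killed and rebuilt afterwards.
function serializeState(state) {
  return JSON.stringify(state);
}

function restoreState(raw, fallback) {
  try {
    return raw ? JSON.parse(raw) : fallback;
  } catch {
    return fallback; // corrupted snapshot: start fresh
  }
}

// In a real worker you would persist the snapshot (e.g. to IndexedDB)
// on every mutation and call restoreState at the top of each fetch
// handler, since any event may be the first one after the browser
// has restarted the worker.
```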

[+] koonsolo|3 years ago|reply
Firefox is running behind on a lot of the latest features. And considering its market share, sometimes it's just less effort to ignore its limitations.
[+] cash22|3 years ago|reply
In what contexts are they not permitted? I know that not all browsers support them, but are there other limitations?
[+] Gunax|3 years ago|reply
Not sure i really get it.

So we used to have react/vue/whatever. You click button, and the front end computes the new DOM. No server needed.

Then, we decide to use htmx, where the server actually computes the new DOM elements, and the client is just a dumb client that displays whatever the server gives.

And now, we give the user's browser all of the information to act like a server: it intercepts the call and computes what the server would have responded with (saving some network packets and lag).

Did I understand that correctly?

And yea, I also saw that talk about HTMX on HN yesterday... didn't really understand the advantage then either.
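The interception flow described above can be sketched like this; `shouldIntercept`, the `/app/` prefix, and `renderHtmlFragment` are illustrative (in the PoC the rendering is done by Rust compiled to WASM):

```javascript
// Decide which requests the worker answers locally (illustrative rule).
function shouldIntercept(pathname) {
  return pathname.startsWith("/app/");
}

// Guarded so the sketch is inert outside a worker context.
if (typeof self !== "undefined" && typeof self.addEventListener === "function") {
  self.addEventListener("fetch", (evt) => {
    const url = new URL(evt.request.url);
    if (shouldIntercept(url.pathname)) {
      // renderHtmlFragment is hypothetical: the "server" running in
      // the browser computes the HTML the real server would have sent.
      evt.respondWith(
        new Response(renderHtmlFragment(url.pathname), {
          headers: { "Content-Type": "text/html" },
        })
      );
    }
  });
}
```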

[+] dragonelite|3 years ago|reply
Man, in my 8 years of development I have seen the transition away from server side to SPA, and from SPA back to server side. Now people are doing server side in the browser. This industry will never become boring if we keep repeating these cycles.

[+] dezmou|3 years ago|reply
I had the same reaction. Maybe this new method of updating the front end is more "accessibility friendly".
[+] bkolobara|3 years ago|reply
What a coincidence, I was just discussing on discord a similar approach for our Rust web framework submillisecond[0].

Submillisecond uses lunatic to run Rust code compiled to WebAssembly on the backend. We are working on a LiveView-like library now. And one thing I would love to give developers for free is an offline-first experience. You write everything in Rust, compile it to WebAssembly, and run it as a regular backend on lunatic, but also allow moving the whole server into the browser for an offline experience. If SQLite is used for the DB, it could also potentially run in the browser.

This doesn't need to move the whole app into the browser, but could do so just for more latency sensitive workloads that don't fit LiveView well. Like form validation on every keypress, etc.

[0]: https://github.com/lunatic-solutions/submillisecond

[+] nymanjon|3 years ago|reply
I write my apps like this for my offline-first web apps. I thought about this problem for a while and think it would be cool to do a totally progressively enhanced experience, from no JS all the way to being offline with no need for the back end at all (except for syncing data).

Another option, instead of putting SQLite on the front end, would be to use a repository pattern: if it's on the front end, back it with IndexedDB; if it's on the back end, use SQLite or some other custom implementation for the repo.
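That repository pattern might be sketched like so; the names are illustrative, and the in-memory backend stands in for the IndexedDB and SQLite implementations:

```javascript
// One interface, swappable backends: app code only sees get/put.
function createMemoryRepo() {
  // In-memory stand-in; a browser build would back this with
  // IndexedDB, a server build with SQLite.
  const rows = new Map();
  return {
    async get(id) {
      return rows.get(id);
    },
    async put(id, value) {
      rows.set(id, value);
      return value;
    },
  };
}

// App code stays the same regardless of which backend it gets.
async function renameUser(repo, id, name) {
  const user = (await repo.get(id)) || { id };
  return repo.put(id, { ...user, name });
}
```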

https://news.ycombinator.com/item?id=33319875

[+] richardanaya|3 years ago|reply
Saw HTMX on hacker news the other day, thought I’d do this interesting experiment.
[+] sgt|3 years ago|reply
HTMX is very interesting, I saw it presented at DjangoCon.

Being involved in a lot of SPA's these days I can't help think that we spend way too much time building complex frontends and managing state.

Anyone feel the same way? HTMX feels fresh.

[+] CGamesPlay|3 years ago|reply
Any takeaways? Is it a promising direction? Would it be viable to create an “isomorphic rust” web app?
[+] nabakin|3 years ago|reply
How would you maintain state in the service worker when it is shut down by the browser after being idle for a period of time? Or does Htmx not keep state on the server side in the first place?
[+] tannhaeuser|3 years ago|reply
What's the point, though? HTMX or any other SGMLish angle-bracket markup template engine is for content creators/authors, not necessarily developers. Sure you can play connect-the-dots and combine technologies picked up from the HN front page, but if you're using heavyweight developer pipelines such as Rust and WASM with service workers anyway, that's already way out of reach for non-developers, and comes with full developer responsibilities for testing, security, maintenance, build systems, versions, dependencies, and whatnot.
[+] girvo|3 years ago|reply
While this is the least important part of this project: Htmx is great! I've used it quite extensively lately for internal tools at work. AlpineJS has also been useful for things that need a little more JSON-API-driven oomph.
[+] rasso|3 years ago|reply
Interesting. How did you work around the issue that HTMX takes snapshots of the DOM as altered by AlpineJS for history navigation [0]? This is the biggest issue holding me back from using the two together.

[0] https://github.com/bigskysoftware/htmx/issues/1015

[+] transfire|3 years ago|reply
Interesting. So a front end could utilize HTMX to do dynamic client side rendering without javascript, limiting what goes server-side to only the things the server needs. Sounds very promising.

The code looks a bit complicated though. Some explanation of what’s going on would be helpful.

[+] FpUser|3 years ago|reply
There is no "without javascript" here. There is a JavaScript library linked from your HTML, and it then modifies / supplements the behavior of the DOM elements. JavaScript / DOM / HTML / CSS is a super dynamic and flexible combo that can do what looks like a miracle to the unfamiliar.
[+] aitchnyu|3 years ago|reply
I was actually wondering when fake service worker servers would take web dev by storm and become the preferred way to build offline-first web apps. Go to /take-census-with-spotty-connection, a "web app" bundled into a service worker gives you a form, you submit it, and the web app saves it into an in-browser database. I imagined it as a pure JS solution though.

After a Django+React project demanded 24 hour days from my team, I achieved the insight that with pure Django pages, you call (request) using strings containing function name and parameters like `/articles/page/2` and get an output (response) and the runtime (browser) can memoize (cache) the result since the whole process is... functional. Some of my react pages had a dozen ways of reaching illegal states, many caused by network calls. The former became the ideal to strive for. Hence why I think fake web servers via service workers will be popular for bug free (heh) offline first apps.
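The memoization analogy above, in miniature; `fetchPage` and the render counter are illustrative:

```javascript
// Because the URL string fully determines the response, it can be
// used as a cache key, just like the arguments to a pure function.
function memoize(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) {
      cache.set(key, fn(key));
    }
    return cache.get(key);
  };
}

// Hypothetical handler: /articles/page/2 -> rendered HTML.
let renders = 0;
const fetchPage = memoize((path) => {
  renders += 1;
  return `<ul data-path="${path}"></ul>`;
});
```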

[+] hardwaregeek|3 years ago|reply
Haha this is fantastic! Turn something that is insistently server side into something client side. Plus it's using cool tech.
[+] harryvederci|3 years ago|reply
Is using a ServiceWorker required here? I'm not familiar with them yet; maybe someone can explain whether the same can be achieved with htmx's `beforeRequest` event (https://htmx.org/events/#htmx:beforeRequest), cancelling the request after running the wasm code?

I was considering using htmx + wasm for my website. Combined with my one-sqlite-db-per-user infrastructure it may enable me to do some fancy edge computing :)

Glad to see a PoC showing that the htmx + webassembly part of it is possible!
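A sketch of the `htmx:beforeRequest` alternative asked about above; the event name is real htmx API, but `renderWithWasm` and the manual swap are assumptions:

```javascript
// Build the handler separately so the wiring is testable: it cancels
// htmx's network request and swaps in locally computed HTML instead.
function makeInterceptor(render) {
  return (evt) => {
    evt.preventDefault(); // stop the real request from going out
    evt.detail.target.innerHTML = render(evt.detail.requestConfig.path);
  };
}

// In a page (renderWithWasm would call into the WASM module):
// document.body.addEventListener(
//   "htmx:beforeRequest",
//   makeInterceptor(renderWithWasm)
// );
```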

[+] gedw99|3 years ago|reply
This is clever.

Can we render on the server, then upgrade to rendering on the client progressively?

Also, can we load other wasm modules using server push in the background? You'd get runtime plugins then.

[+] padjo|3 years ago|reply
Now that we’re heading back to dumping html over the wire I look forward to the reinvigoration of the XSS attack.
[+] RamblingCTO|3 years ago|reply
I didn't read the article (I know, I know), but I think it would be really cool to get to run other languages in the frontend. Javascript just plain sucks. Having something like Kotlin or golang (in a reduced form) would be really neat and could professionalize the frontend even more. I think the frontend is always a bit wild west compared to most backend implementations. That's due not only to more testing but also to stronger language guarantees.
[+] nwsm|3 years ago|reply
If you read the article you might learn something about WebAssembly which will do the things you're talking about.
[+] MuffinFlavored|3 years ago|reply
> Javascript just plain sucks.

Can you say why? I don't agree/blanket statements like this seem sorta silly.

[+] altilunium|3 years ago|reply
Which is faster, v8 javascript or webassembly + rust?
[+] diego_moita|3 years ago|reply
So, for your webpage to start, it first has to download a 1.74 MB component?

Just because "it is cool"?

[+] noveltyaccount|3 years ago|reply
It's a proof of concept, and yes, it is cool. HN can be so critical sometimes, sheesh.
[+] quickthrower2|3 years ago|reply
Imagine a line-of-business web app: once you are logged in and use the app, it downloads this. How is that much different from downloading a desktop app that needs to pull in a few megs of libraries?

It definitely is not needed for blogs, wikis, news sites etc. This would be for web applications.

[+] panzerboiler|3 years ago|reply
Not because "it is cool" but because it needs that code to operate. Consider this scenario, even for a classic website/blog: you eagerly download all of the textual content at first load as markdown, cache it in the ServiceWorker together with a markdown parser, and go offline. You can then ask the network, if available, only for the images, which get added to the offline cache. Even with a spotty and slow mobile connection, after the initial load you are done with the network and can read all the content in an efficient, environmentally friendly, and fast way. When the network is available, you can check for updates by sending a small request to the server and refreshing the cache.
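A rough sketch of that flow; the cache name is made up, and the worker-side calls are shown only in outline:

```javascript
// Pure helper: which URLs still need fetching, given what's cached.
function missingUrls(wanted, cached) {
  const have = new Set(cached);
  return wanted.filter((url) => !have.has(url));
}

// In the worker (illustrative; "content-v1" is a made-up cache name):
// self.addEventListener("install", (evt) => {
//   evt.waitUntil(
//     caches.open("content-v1").then((c) => c.addAll(textContentUrls))
//   );
// });
// Later, when the network is available, fetch only
// missingUrls(imageUrls, alreadyCachedUrls) and add them to the cache.
```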
[+] nymanjon|3 years ago|reply
lol, I thought since it was Rust that it would be like 100 KB or something. I think C# has its WASM package down to 1 MB or so. The implementation I did with just JS is about 50 KB or so, total, for the app and everything. That is a bit disappointing.
[+] aledalgrande|3 years ago|reply
Have you seen how much you need to download to make Figma work? Not saying that every page should be like that but...