It's kinda funny to me that many of the "pros" of this approach are the exact reasons so many abandoned MPAs in the first place.
For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
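The shared-validation upside is real, though: one function can run on both sides. A minimal sketch with hypothetical names, no framework assumed:

```javascript
// A hypothetical validator shared by browser and Node server, so each
// keystroke validates locally and the server re-checks before saving.
function validateEmail(value) {
  if (typeof value !== "string" || value.length === 0) {
    return { ok: false, error: "required" };
  }
  // Deliberately simple pattern, for illustration only.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)) {
    return { ok: false, error: "invalid format" };
  }
  return { ok: true };
}
```

The client calls this on each input event for instant feedback; the server calls it again before persisting, since client checks can always be bypassed.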
Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.
We moved away from MPAs because they were bloated, slow, and difficult to work with. SPAs have definitely become what they sought to replace.
But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
I use tech like HTMX because, as a team of one, I have no other choice.
I tried using Angular in 2019, and it nearly sank me. The dependency graph was so convoluted that updates were basically impossible. Having a separate API meant that I had to write everything twice. My productivity plummeted.
After that experience, I realized that what works for a front-end team may not work for me, and I went back to MPAs with JavaScript sprinkled in.
This year, I've looked at Node again now that frameworks like Next offer a middle ground with server-side rendering, but I'm still put off by the dependency graphs and tooling, which seems to be in a constant state of flux. It seems to offer great benefits for front-end teams that have the time to deal with it, but that's not me.
All this to say pick the right tool for the job. For me, and for teams going fuller stack as shops tighten their belts, that's tech like HTMX, sprinkled JavaScript, and sometimes lightweight frameworks like Alpine.
i am the creator of htmx, this is a great article that touches on a lot of the advantages of the hypermedia approach (two big ones: simplicity & it eliminates the two-codebase problem, which puts pressure on teams to adopt js on the backend even if it isn't the best server side option)
hypermedia isn't ideal for everything[1], but it is an interesting & useful technology and libraries like htmx make it much more relevant for modern development
we have a free book on practical hypermedia (a review of concepts, old web 1.0 style apps, modernized htmx-based apps, and mobile hypermedia based on hyperview[2]) available here:
We've been using similar architecture at Yahoo for many years now. We tried to go all in on a React framework that worked on the server and client, but the client was extremely slow to bootstrap due to downloading/parsing lots of React components, then React needing to rehydrate all the data and re-render the client. Not to mention rendering an entire React app on the server is a huge bottleneck for performance (can't wait for Server Components / Suspense which are supposed to make this better ... aside: we had to make this architecture ourselves to split up one giant React render tree into multiple separate ones that we can then rehydrate and attach to on the client)
We've moved back to an MPA structure with decorated markup to add interactivity like scroll views, fetching data, tabs and other common UX use cases. If you view the source on yahoo.com and look for "wafer," you can see some examples of how this works. It helps to avoid bundle size bloat from having to download and compile tons of JS for functionality to work.
For a more complex, data-driven site, I still think the SPA architecture or "islands" approach is ideal instead of MPA. For our largely static site, going full MPA with a simple client-side library based on HTML decorations has worked really well for us.
This is a necessity as long as latencies between the client and server are large enough to be perceptible to a human (i.e. almost always in a non-LAN environment).
[edit]
I also just noticed:
> ...these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
The part about "slow and unreliable internet connections" is not specific to SPAs. If anything, a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.
[edit2]
> If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation.
This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
--
I like the idea of HTMX, but the first half of the article is a silly argument against SPAs. Was the author "cheating" in the second half by transpiling clojure to the JVM? Have they tested their TODO example on old hardware with an unreliable internet connection?
Everybody's arguing about whether Htmx can do this or that, or how it handles complex use case X, but Htmx can do 90% of what people need in an extremely simple and straightforward way. That means it (or at least its approach) won't disappear.
A highly complex stock-trading application should absolutely not be using Htmx.
But a configuration page? A blog? Any basic app that doesn't require real-time updates? Htmx makes much more sense for those than React. And those simple needs are a much bigger part of the internet than the Hacker News crowd realizes or wants to admit.
If I could make one argument against SPAs, it's not that they don't have their uses (they obviously do); it's that we're using them for too much and too often. At some point we decided everything had to be an SPA, and it was only a matter of time before people sobered up and realized things went too far.
I really want to switch over to htmx, as I've moved away from SPA frameworks and I've been much happier. SPAs have so much abstraction, and modern, vanilla JavaScript is pretty decent to work with.
The thing that keeps holding me back from htmx is that it breaks Content Security Policy (CSP), which means you lose an effective protection against XSS.[0] When I last asked the maintainer about this, the response was that this was unlikely to ever change.[1]
Alpine.js, a similar project to htmx, claims to have a CSP-compatible version,[2] but it's not actually available in any official builds.
People were making this prediction ten years ago. It was wrong then, and it's wrong now.
This article makes its case about Htmx, but points out that its argument applies equally to Hotwired (formerly Turbolinks). Both Htmx and Hotwired/Turbolinks use custom HTML attributes with just a little bit of client-side JS to allow client-side requests to replace fragments of a page with HTML generated on the server side.
But Turbolinks is more than ten years old. React was born and rose to popularity during the age of Turbolinks. Turbolinks has already lost the war against React.
The biggest problem with Turbolinks/Htmx is that there's no good story for what happens when one component in a tree needs to update another component in the tree. (Especially if it's a "second cousin" component, where your parent component's parent component has subcomponents you want to update.)
EDIT: I know about multi-swap. https://htmx.org/extensions/multi-swap/ It's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client. If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.
Furthermore, in Turbolinks/Htmx, it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background. This means that the user always has to wait for a server round trip to create a TODO item, hurting the user experience. It's unacceptable on mobile web in particular.
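For contrast, here is a rough sketch of the optimistic-update bookkeeping being described, in plain JavaScript with illustrative names (not any framework's actual API):

```javascript
// Illustrative optimistic-UI bookkeeping for a todo list: the item
// renders immediately with a pending flag, then is confirmed with the
// server's id, or rolled back if the request fails.
function addOptimistic(todos, tempId, text) {
  return [...todos, { id: tempId, text, pending: true }];
}

function confirmItem(todos, tempId, serverId) {
  return todos.map(t =>
    t.id === tempId ? { ...t, id: serverId, pending: false } : t
  );
}

function rollback(todos, tempId) {
  return todos.filter(t => t.id !== tempId);
}

let todos = addOptimistic([], "tmp-1", "buy milk"); // shown instantly
todos = confirmItem(todos, "tmp-1", 42);            // server replied
```

The user sees the item the moment they hit enter; the network round trip happens in the background, and only a failure ever interrupts them.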
When predicting the future, I always look to the State of JS survey https://2022.stateofjs.com/en-US/libraries/front-end-framewo... which asks participants which frameworks they've heard of, which ones they want to learn, which ones they're using, and, of the framework(s) they're using, whether they would use it again. This breaks down into Awareness, Usage, Interest, and Retention.
React is looking great on Usage, and still pretty good on Retention. Solid and Svelte are the upstarts, with low usage but very high interest and retention. Htmx doesn't even hit the charts.
The near future is React. The further future might be Svelte or Solid. The future is not Htmx.
1) "Web application development" doesn't happen in a vacuum. Often it happens in contexts where the "backend" is also consumed by various non-web applications. In those contexts, collapsing the frontend and backend back into 1 component is less of the slam dunk than it's made out to be in this post.
2) The missing piece is how you can achieve this "collapsing" back of functionality into single SSR deployable(s) while still preserving the ability to scale out a large web application across many teams. Microfrontends + microservices could be collapsed into SSR "microapplications" that are embedded into their hosting app using iframes?
Personally I believe strongly in thick clients but this is a pretty neat demo anyways.
I see a lot of resemblance to http://catalyst.rocks with WebComponents that target other components. I think there's something unspoken here that's really powerful & interesting, which is the declarativization of the UI. We have stuff on the page, but making the actions & linkages of what does what to what has so far been trapped in code-land, away from the DOM. The exciting possibility is that we can nicely encode more of the behavior into the DOM, which creates a consistent learnable/visible/malleable pattern for wiring (and rewiring) stuff up. It pushes what hypermedia can capture into a much deeper zone of behaviors than just anchor-tag links (and listeners, which are jump points away from the medium into codespace).
While HTMX makes some interactions easier for developers without JS experience, the primary issue in web development is that the browser was not designed for apps. It evolved unevenly from a document-navigation platform, and many things we do in web development today are hacks due to the lack of a better solution.
In my opinion, the future of the web as a platform is about viewing the web browser as an operating system with basic composable primitives.
HTMX adds attributes to HTML using JS, and the "no-JavaScript" argument is misleading: with HTMX you can write interactions without writing JS, but HTMX itself is JS. Still, since it forces you to use HTML constructs that work without scripts (such as forms), the page will fall back. That doesn't mean the fallback is usable.
The custom HTMX attributes work because the browser supports extending its behavior via JS. If we add those attributes to standard HTML, the result is more fragmentation and an endless race. The best standard is one that eliminates the need for creating more high-level standards. In my view, a possible evolution of WASM could achieve that goal. It means going in the opposite direction of the article, as clients will do more computing work. In a future like that, you could use HTMX, SwiftUI, Flutter, or React to develop web apps. The biggest challenge is balancing a powerful OS-like browser with attributes like searchability, accessibility, and learnability (the devtools inspector and console are the closest thing to Smalltalk we have today); even desktop OSs struggle to provide that.
> HTMX allows you to design pages that fetch fragments of HTML from your server to update the user's page as needed without the annoying full-page load refresh.
I've been on the sidelines for the better part of a decade for frontend stuff, but I was full-stack at a tiny startup in 2012ish that used Rails with partial fragments templates for this. It needed some more custom JS than having a "replacement target" annotation everywhere, but it was pretty straightforward, and provided shared rendering for the initial page load and these updates.
So, question to those who have been active in the frontend world since then: that obviously failed to win the market compared to JS-first/client-first approaches (Backbone was the alternative we were playing with back then). Has something shifted now that this is a significantly more appealing mode?
IIRC, one of the big downsides of that "partial" approach in comparison with SPA-approaches was that we had to still write those JSON-or-XML-returning versions of the endpoints as mobile clients became more prevalent. That seems like it would still be an issue here too.
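One mitigation is a single endpoint that branches on the Accept header, so the HTML fragment and the JSON version share one handler. A hypothetical sketch of the branching core (no real framework, and `renderTodoItem` is an invented name):

```javascript
// One handler core, two representations: browsers get a ready-to-swap
// fragment, mobile/API clients get JSON.
function renderTodoItem(todo, acceptHeader) {
  if (acceptHeader && acceptHeader.includes("application/json")) {
    return JSON.stringify(todo);
  }
  // Real code must HTML-escape todo.text before interpolating it.
  return `<li id="todo-${todo.id}">${todo.text}</li>`;
}
```

It doesn't eliminate the second representation, but it keeps both next to the same data access and business logic instead of in two codebases.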
I just want Visual Basic for the web man. Screw writing lines of code. I want to point and click, drop complex automated objects onto a design, put in the inputs and outputs, and publish it. I don't care how you do it, I don't want to know any of the details. I just want to be able to make things quickly and easily. I don't care about programming, I just want to get work done and move on with my life.
At this rate, when I'm 80 years old we will still be fucking around with these stupid lines of code, hunched over, ruining our eyesight, becoming ever more atrophied, all to make a fucking text box in a monitor pop some text into a screen on another monitor somewhere else in the world. It's absolutely absurd that we spend this much of our lives to do such a dumb thing, and we've been iterating on it for five decades, and it's still just popping some text in a screen, but we applaud ourselves that we're so advanced now because something you can't even see is doing something different in the background.
Thank you for writing this article! I've had similar thoughts for the past 5 years or so.
A lot of the comments here seem to take the approach that there is a single best stack for building web applications. I believe this comes from the fact that, as web engineers, we have to choose which tech to invest our careers in, which is inherently risky. Spend a couple of years on something that becomes defunct and it feels like a waste. Also, startup recruiters are always looking for the tech experience that matches their companies' choices. VCs want to strike while the iron is hot.
Something that doesn't get talked about enough (which the author does mention near the end of article) is that different web apps have different needs. There is 100% a need for SPAs for certain use cases. Messaging, video players, etc. But there are many cases where it is overkill, like the many many CRUD resource apps I've built over the years. Say you have a couple hundred users that need to manage the state of a dozen interconnected resources. The benefits of an MPA are great here. Routing is free, no duplication of FE / BE code. Small teams of devs can ship code and fix bugs very fast which keeps the user feedback loop tight.
I’ve been using HTMX (from Clojure) for projects recently and I have to say I like it a lot. Full-stack web stuff is a hobby for me and I always had trouble really grokking all the parts of SPAs. HTMX fits neatly into my brain’s model of how websites should work.
> ... requires a full page refresh to use ... isn't good enough for many types of web-app we need to make.
> without the annoying full-page load refresh.
This fixation on the page refresh needs to stop. Nearly every single website which has purportedly "saved" page refreshes has brutalized every other aspect of the UX.
This is a good article, and I agree that Htmx brings sanity back to the frontend, but somewhere along the line frontend folks got it in their head that page refreshes were bad, which is incorrect for essentially all CRUD / REST APIs. Unless you're specifically making a complex application that happens to be served through the web, like Kibana or Metabase, then stop harping on page refreshes.
Even this article calls it the annoying refresh. Not the impediment refresh, or the derisive refresh, or the begrieved refresh. Moreover, what exactly is annoying about page refreshes? That there's a brief flash? That it takes ~0.3 seconds to completely resolve?
Users don't care about page refreshes, and in fact they are an indication of normalcy. Upending the entire stack and simultaneously breaking expected functionality to prevent them is madness.
The killer feature of Htmx is that it doesn't upend the entire stack, and you can optimize page refreshes relatively easily. That's great! But even then I'm still not convinced the tradeoff is worth it.
I'm not seeing it. SPAs can be overly complex and have other issues, but I'm not seeing HTMX as a particular improvement.
Also, a bunch of this article doesn't make sense to me.
E.g, one of the listed costs of SPAs is managing state on the client and server... but (1) you don't have to -- isn't it rather common to keep your app server stateless? -- and (2) HTMX certainly allows for client-side and server-side state, so I'm not sure how it's improving things. That is, if you want to carefully manage app state, you're going to need a mechanism to do that, and HTMX isn't going to help you.
It also doesn't somehow prevent a rat's nest of tooling or dependencies. It isn't an application framework, so this all depends on how you solve that.
SPAs also aren't inherently "very easy to make [...] incorrectly".
Also, the suggested HTMX approach to having no browser-side JavaScript is very crappy. Your app would have to be very specifically designed just to be pretty horrible without JS instead of utterly horrible. There are far more straightforward ways to make apps that work well without JS. And this isn't exactly a mainstream requirement in my experience.
I could go on and on. "Caching": htmx doesn't address the hard part of caching. "SEO-friendliness": like all the benefits here attributed to htmx, htmx doesn't particularly help with this, and there are many other available ways to achieve it.
IDK. These kinds of over-promising, hyped-up articles give me the feeling the thing being hyped probably doesn't have a lot of real merit to be explored, or else they'd talk about that instead. It also feels dishonest to me, or at least incompetent, to make all of these claims and assertions that aren't really true or aren't really especially a benefit of htmx versus numerous other options.
I remember fetching HTML from the server with AJAX and updating innerHTML before it was called AJAX. Is HTMX repackaging that or am I missing some exciting breakthrough here?
Server-side apps cannot provide optimistic UI. No matter how you feel about it, they are limited in this capability compared to client-side apps. The user doesn’t care about the technology. For example, imagine a todo app that shows a new todo immediately. Or form validations that happen as soon as data is entered. That’s a superior experience to waiting on the server to continue interaction. Whether that’s harder to engineer is irrelevant to the user. We should be striving for the best possible user experience, not what we as engineers personally find easy or comfortable.
HTMX is cool. HTMX may fit your needs. But it’s not enough for providing the best possible user experience.
I love articles like these, because the narrative of "JS framework peddlers have hoodwinked you!" is fun, in an old-timey snake oil salesman kind of way.
But I'll be honest. I'll believe it when I see it. It's not that htmx is bad, but given the complexity of client-side interactions on the modern web, I can't see it ever becoming really popular.
Some of the specifics in the comparisons are always weird, too.
> Instead of one universal client, scores of developers create bespoke clients, which have to understand the raw data they fetch from web servers and then render controls according to the data.
This is about client side apps fetching arbitrary JSON payloads, but your htmx backend needs to do the same work, right? You have to work with the raw data you get from your DB (or another service) and then render based on that data.
You're still coupled to the data, and your htmx endpoint is just as "bespoke" as the client code which uses it. It's not wrong to prefer that work be done on the server instead of the client, or vice versa, but we're really just shuffling complexity around.
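To make the "shuffling" concrete: the shaping work exists either way, only its location moves. A toy illustration with invented names:

```javascript
// Server-side (htmx-style): raw data becomes an HTML string on the server.
function rowToHtml(user) {
  // Real code must HTML-escape user-supplied fields.
  return `<tr><td>${user.name}</td><td>${user.email}</td></tr>`;
}

// Client-side (SPA-style): the same raw data becomes a view model
// in the browser. Same coupling to the data, different address.
function rowToViewModel(user) {
  return { cells: [user.name, user.email] };
}
```

Either function has to know the shape of `user`; the choice is about where that knowledge lives, not whether it exists.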
"If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation." -- this is the crux of the article.
These kinds of takes fall in the bullseye of "I don't want to program in JavaScript". The subtext is all about this.
Perhaps.. maybe.. Htmx won't be the future because there are a lot of people that like programming in Javascript?
The problem is that these kinds of approaches require more upfront thought, which produces less now and pays off later... and only if maintained by people in tune with the original design.
I've seen these architectures quickly ruined by 'can-do' people who butcher everything to get a feature done _and_ get a bonus from management for quick delivery.
I used to have my own hand-written mini version of htmx ten years ago. It took a few lines of jQuery to have small parts of the UX update without a refresh.
I don't really see the point, by the way. I think htmx is here to stay and a good choice for many, but it's clearly not a silver bullet. You get decently fast UIs, not blazing fast ones; there are no (proper) offline-first apps with htmx; caching is likely more difficult, or sometimes impossible; and the load on your server is inevitably greater (though it could be more than acceptable in some cases, so why not?), which also means more bandwidth through your cloud provider as opposed to your CDN. You will still have to write JavaScript sooner or later.
It depends on what you're doing. Nothing is a priori "the future"; the future is "the future", and it has yet to come.
I first encountered the principles behind htmx in its precursor intercooler.js. Those principles really resonated with my distaste for complexity. Amusingly I found out about htmx itself when rereading https://grugbrain.dev and it all clicked! htmx is crystal that trap internet complexity demon!
Whether it is Htmx or Phoenix/LiveView or Hotwire/Stimulus, we're seeing a shift in the industry toward augmented HTML rather than throwing away all RESTful routes. REST is elegant, and this approach is very powerful as well.
Love articles like this because I know there's some manager somewhere who will read this and force it upon their team without having any idea if it's good or not. And in 3 years we'll have people from those teams complaining about HTMX because it's not suited for their projects.
The future is whatever works best for your use-case.
I don't disagree with the article, but I feel like the author almost landed on an interesting counterpoint. Author points out they didn't do this in ClojureScript, but writing apps in Reagent (the leading ClojureScript React wrapper) has looked almost identical across many years and many versions of React. Many of the state management epochs have also been avoided, because "manage state" is a core idea in Clojure, and so the stuff we had almost a decade ago is still perfectly fine today.
So, I posit that the churn, while definitely real, is not actually intrinsic.
Right now, at Latacora, we're writing a bunch of Clojure. That includes Clerk notebooks, some of which incorporate React components. That's an advantage I think we shouldn't ignore: not needing to write my own, say, Gantt chart component, is a blessing. So, specifically: not only do I think the churn is incidental to the problem, I don't even believe you need to give up compatibility to get it.
Fun fact: despite all of this, a lot of what we're writing is in Clerk, and while that's still fundamentally an SPA-style combination of frontend and backend if you were to look at the implementation, it absolutely _feels_ like an htmx app does, in that it's a visualization of your backend first and foremost (React components notwithstanding).
This puts all the computational load on the server.
Imagine tens of thousands of clients requesting millions of HTML fragments be put together by a single server maintaining all the state, while all the powerful high-end computing power at the end user's fingertips goes completely to waste.
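How bad that is depends heavily on caching, since identical fragments don't have to be re-rendered per client. A toy memoization sketch (hypothetical, not any real library):

```javascript
// Identical fragment requests are rendered once and then served from
// memory, so N clients need not mean N renders.
const cache = new Map();
let renders = 0;

function renderFragment(key, renderFn) {
  if (!cache.has(key)) cache.set(key, renderFn());
  return cache.get(key);
}

const todoList = () => { renders++; return "<ul><li>item</li></ul>"; };

renderFragment("todos:v1", todoList);
renderFragment("todos:v1", todoList); // cache hit, no second render
```

Real deployments would put this behind a CDN or a shared cache with invalidation keys, but the principle is the same: server-rendered HTML is very cacheable when it isn't per-user.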
I keep saying this and will say it again: what's really needed is a stateful GUI markup language. HTML+DOM+JS+CSS is the wrong tool for the CRUD/GUI job, and force-fitting it has inflamed the area so badly that many don't even want to try to scratch it.
Bloated JS frameworks like Angular, React, Vue, and Electron have big learning curves and a jillion gotchas because they have to reinvent long-known and loved GUI idioms from scratch, but the DOM is inherently defective for that need, being meant for static documents. There are just too many GUI needs that HTML/DOM lacks or can't do right:
https://www.reddit.com/r/CRUDology/comments/10ze9hu/missing_...
Let's byte the bullet and create a GUI markup standard. Perhaps base it off Tk or Qt kits to avoid starting from scratch.
HTMX brings new life to tech like Django, which can catapult MVPs into production ASAP.
Backend engineers are now able to write management tools and experimental products faster, and then pass the winning products off to a Flutter team to code for all environments. The backend could be converted into a Django REST API if the code is properly refactored.
I take the opposite side of that bet. Always bet on (more) Javascript. Dealing with HTML sucks, and we want as little of it as possible, otherwise we would not have invented generations of frameworks to make it manageable. The lasting success of React shows that we have converged on how to do that. Moving back to MPAs is always something that bored engineers want to do. Users generally do not care.
Moreover, REST APIs - and I mean the simple ones people actually want to use, none of that HATEOAS BS - are ubiquitous for all sorts of interactions between web and nonweb clients. Are you going to ship an MPA as your mobile apps, or are you going to just use REST plus whatever clients make sense?
It also makes a lot of sense in terms of organization. Your backend developers probably suck at design, your frontend developers suck at databases.
I can still remember the horrors of page state. The server would keep track of what the client has and only send HTML fragments to the client. Early-days ASP, Prado and the likes did this and it was a terrible idea. HTMX sounds very much like that, but the packaging is nicer. Ultimately, the problem is that sometimes you need to update more than just the tiny, well-defined part that is the todo list and several parts of the UI need to change when a request is made. By which I mean this happens all the time. The road to hell is paved with todo list implementations as a proof that a system works and is good. Please show me a moderately complex login system with interlinking interfaces implemented in this.
Back in the days, when JSON became popular as response type for rendering in the client, I saw arguments such as "the JSON payload is smaller than sending full HTML, so you pay the download only once instead of N times".
Only once, because what has to be done with the JSON has been downloaded in the JS bundle. With full HTML, full HTML comes back in every response.
However, I'm not sure if this is actually a problem or rather depends on how much interaction the user does (so where is the "turning point" of the overhead of having all in the bundle vs full HTML responses). What does everyone think?
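A back-of-the-envelope way to find that turning point: the bundle is paid once, while the HTML overhead is paid per interaction. A sketch with made-up numbers:

```javascript
// Solve bundle + n*json < n*html for n, the interaction count at
// which the SPA's up-front bundle starts paying for itself.
function breakEvenInteractions(bundleBytes, jsonBytes, htmlBytes) {
  return Math.ceil(bundleBytes / (htmlBytes - jsonBytes));
}

// e.g. a 300 KB bundle, 1 KB JSON responses vs 4 KB HTML fragments:
breakEvenInteractions(300_000, 1_000, 4_000); // 100 interactions
```

The illustrative numbers matter less than the shape of the result: for low-interaction pages the bundle never pays off, and compression narrows the HTML/JSON gap further.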
I somewhat get where htmx is coming from. It's not bad per se... I actually like the general idea behind it (it's sorta like Turbolinks, but a bit more optimal, using fragments instead of the entire page, though Turbolinks requires zero additional work on the markup side and works with JavaScript disabled out of the box).
With that being said, I imagine it would become unmaintainable very quickly. The problems htmx is solving are better solved with other solutions in my opinion, but I do think there's something that can be learned or leveraged with the way htmx goes about the solution.
The tricky part of an SPA is that as a developer, you're taking on a lot of the burden of managing location state that in an MPA is handled by the browser. And location state often is a significant component of application state.
Certainly it's possible to take on that burden and execute it well, but I think a lot of teams and businesses don't fully account for the fact that they are doing so and properly deciding if that extra burden is really necessary. The baseline for nailing performance and correctness is higher with an SPA.
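A concrete taste of that burden: serializing view state into the URL so back/forward and deep links keep working, something the browser does for free in an MPA. A minimal sketch:

```javascript
// In an MPA the browser owns location state; in an SPA you do.
// Serialize filter state into the query string and back.
function stateToQuery(state) {
  const params = new URLSearchParams();
  for (const [k, v] of Object.entries(state)) params.set(k, String(v));
  return "?" + params.toString();
}

function queryToState(query) {
  // Note: everything round-trips as a string; restoring types,
  // scroll position, and focus is still on you.
  return Object.fromEntries(new URLSearchParams(query));
}

stateToQuery({ page: 2, tag: "htmx" }); // "?page=2&tag=htmx"
```

Each of those comments hides real work (history entries, type coercion, scroll restoration) that teams often discover only after shipping.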
Using an HTTP header to decide between "just return a snippet for this specific list element" v. "return the whole page with the updated content for this list element" is an interesting choice that I hadn't really considered before; normally I would've opted for two entirely separate routes (one for the full page, one for the specific hypermedia snippet), which HTMX also seems to support. I guess it ain't fundamentally different from using e.g. Accept-* headers for content negotiation.
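htmx does send an `HX-Request: true` header with its requests, so the negotiation can be sketched roughly like this (the handler shape is hypothetical):

```javascript
// An htmx-issued request carries `HX-Request: true`; a direct browser
// navigation won't, so one route can serve both cases.
function respond(headers, itemHtml) {
  const layout = body => `<html><body>${body}</body></html>`;
  if (headers["hx-request"] === "true") {
    return itemHtml;        // fragment for an in-place swap
  }
  return layout(itemHtml);  // full page for a direct navigation
}
```

It is indeed the same idea as Accept-based content negotiation, just keyed on a custom header, so a deep link to the URL still yields a complete page.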
meta: I love when htmx is highlighted in HN because the discussions branch into alternatives and different ways of doing web dev. It's very enriching to think outside the box!
I think what is needed is to recognize that the SPA architecture isn't actually just a view processor. IMO it is a badly designed:
View rendered <--> client process <--> server process
So it seems that SPA apps load an absolute mountain of JavaScript into the view (the tab/page), and then that starts (crudely, IMO) running as a client-side daemon tracking messy state and interfacing with local storage, with JavaScript (opinion: yuck) ferreted away in a half dozen divs.
IMO, what has been needed since you have local storage and local session state and all that is ... a client daemon that the web page talks to that offers data services, and then that client daemon if it needs server data calls to the internet.
That way, local state tracking, transformation, and maintenance can be isolated away from the code of the view, and large amounts of JavaScript can be dropped (maybe all of it, with CSS wizardry). The "client daemon" can be coded in WebAssembly, so you aren't stuck with JavaScript (opinion: yuck).
You can even have many views/tabs interfacing more efficiently with the single client daemon, and the client daemon can track and sync data between different tabs/views/windows.
Now, of course that is fucking ripe as hell for abuse, tracking. Not sure how to solve it.
But "separation of concerns" in current web frameworks is a pipe dream.
Why are people concentrating on creating more hacks on top of a stack fundamentally ill-suited to app development, rather than rethinking the whole stack from first principles?
I like some concepts from HTMX but I don't understand how it tracks the relationship between these addresses and the identifiers in the markup. It seems to be just that the identifier strings match - the markup identifies the targets/swaps and it just refers to itself.
When I compare this to Phoenix LiveView I much prefer LiveView, because it both provides the markup templating engine and tracks the meaning of the relationship, with server-side tokens and methods.
I think it's the wrong article (pun semi-intended), HTMX is a future. React is a future. Svelte is a future. Even Angular is a future. They all have their specific strengths which define where they are more applicable.
There's no "the future" in this area, because demands are very different; a heavily interactive SPA like GMail or Jira has requirements unlike an info page that needs a few bits of interactivity, etc.
I’m afraid htmx might be repeating the same mistake that CORBA and DCOM made decades ago: pretending that latency doesn’t matter.
Yes, you could make a CORBA or DCOM object almost indistinguishable from a local object, except for the latency when it was actually remote. And since it looked like a normal object, it encouraged “chatty” interfaces, which exacerbated the latency cost.
htmx seems pretty chatty to me, which I’m sure works OK over the LAN, but what about the “real” internet?
I agree with this article; however, I think that HTMX needs a strong server framework to support it. I've thought about this a lot, and a couple of months back I created this Deno/TypeScript framework: https://github.com/reggi/htmx-components. I would love for people to take a look at it and provide guidance and direction for a releasable version.
I tried htmx, but the syntax is horrible, so three years ago I created uajax (universal Ajax forms) and js-ajax-button.
Add a class to any form and it is ajaxed.
I even released it on GitHub.
js-ajax-button takes a similar approach: add the class to a button that has a data-url attribute and it will make a request to that URL.
It's a small function, but with uajax it's so powerful that I don't need React or htmx.
But it is hard to sell something that eliminates writing JavaScript.
Lol, we're still having the SPA discussion 7 years later in the year of our lord 2023?
Talk about the positives of YOUR approach, don't tear down a different approach that half the industry is using. You're not going to say anything new or interesting to the person you are trying to convince this way. Experienced engineers already know the trade-offs between an SPA and a server rendered experience.
What's the business case, for them or for developers?
Ideas aside, the web app future belongs to those with the resources to sustain a response, or those who can restore the ability to capture/monetize developers and users in a closed system.
The scope of web apps is broad enough that many technologies arguably have their place. The open javascript ecosystem reduced the cost of creating candidates, but has no real mechanism to declare winners for the purpose of consolidating users, i.e., access to resources.
Careers and companies are built on navigating this complexity, but no one really has the incentive to reduce it unless they can capture that value.
I really appreciate Cloudflare because they are open about both their technology and their business model. They thus offer a reasonable guarantee that they can sustain their service and their technology, without basing that guarantee on the fact that they are a biggie like AWS, Microsoft, or Google (i.e., eating their own dog food, so we can join them at the trough).
The biggest cost in IT is not development or operating fees but reliance and opportunity.
I've made my personal website something of a hybrid SPA. With JS enabled, it only loads and replaces the relevant portions of the page, but a page renders fully from PHP when you go to it directly.
The JS would be a bit more elegant if script tags didn't need special handling to execute on insertion.
The experience is very seamless this way - I'm very pleased with it. It's live at https://jimm.horse - the dynamic behavior can be found clicking on the cooking icon or N64 logo.
On reading the article, I'll definitely make use of this if it becomes well-supported. It does exactly what I wanted here.
I just started using HTMX in new projects and really like it.
The LiveView/Hotwire/Livewire way of building applications makes a really great tradeoff: the ease of building websites with the speed and power of webapp UX.
I wanted something simple to use with Express and it's been very productive.
There's a few things to get used to, but overall like it and plan to keep using it in my projects.
I worked for a startup that built a React + Scala system for building training sets for machine learning models. At the time I was involved, the work had a strong research component; in particular, we frequently had to roll out new tasks, and in fact we were actively working with new customers to adapt to their needs all the time.
The build for the system took about 20 minutes, and part of the complexity was that every new task (form where somebody had to make a judgement) had to be built twice since both a front end and back end component had to be built so React was part of the problem and not part of the solution. Even in a production environment this split would have been a problem because a busy system with many users might still need a new task added from time to time (think AMZN's MTurk) and forcing people to reload the front end to work on a new task defies the whole reason for using React.
It all was a formula for getting a 20 person team to be spinning its wheels, struggling to meet customer requirements and keeping our recruiters busy replacing developers that were getting burnt out.
I've built several generations of my own train-and-filter system since then and the latest one is HTMX powered. Each task is written once on the back end. My "build" process is click the green button on the IDE and the server boots in a second or two. I can add a new task and be collecting data in 5-10 minutes in some cases, contrasted to the "several people struggling for 5 days" that was common with the old system. There certainly are UIs that would be hard to implement with HTMX, but for me HTMX makes it possible to replace the buttons a user can choose from when they click a button (implement decision trees), make a button get "clicked" when a user presses a keyboard button and many other UI refinements.
I can take advantage of all the widgets available in HTML 5 and also add data visualizations based on d3.js. As for speed, I'd say contemporary web frameworks are very much "blub"
On my tablet via tailscale with my server on the wrong end of an ADSL connection I just made a judgement and timed the page reload in less than a second with my stopwatch. On the LAN the responsiveness is basically immediate, like using a desktop application (if the desktop application wasn't always going out to lunch and showing a spinner all the time.)
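A couple of the refinements mentioned above (swapping out the available buttons, and letting a keypress "click" a button) can be expressed as plain htmx attributes. A hedged sketch, where the /judge endpoint and the #task container are invented for illustration:

```html
<!-- Fires on click, or whenever the "1" key is released anywhere on
     the page (an hx-trigger event filter). The server's HTML response
     replaces the whole task area, so each step of a decision tree can
     present a fresh set of buttons. -->
<div id="task">
  <button hx-post="/judge?choice=relevant"
          hx-trigger="click, keyup[key=='1'] from:body"
          hx-target="#task" hx-swap="outerHTML">
    1: Relevant
  </button>
</div>
```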
Thanks for the reminder, I've been meaning to try it out. Just to get started, I asked ChatGPT to write an htmx app to show a 10-day weather forecast.
It described the general steps and seemed to be able to describe how htmx works pretty well, including hx-get and hx-target, etc., but then said "As an AI language model, I am not able to write full applications with code".
I replied "do the same thing in bash" (which I knew would be different in significant ways, but just to check) and it provided the code.
I wonder, is this a function of recency of htmx or something else? Do other htmx developers encounter this? I imagine it's at least a little bit of a pain for these boilerplate cases, if it's consistent vs. access to the same GPT tooling for other languages.
Frontend developers don’t want to write HTML nor augmented HTML. They want to write code, and these days that means JS. Frontend developers want to make good money (like those backend developers or even infrastructure developers who are working with data, servers, and cool programming languages), hence they need to work with complex libraries/frameworks (if you just write HTM?, you don’t get to earn much money because anyone can write HTM?).
Hell, the term “frontend developer” exists only because they are writing JS! Tell them it’s better to write HTM?, and you are removing the “developer” from their titles!
Same reason why backend developers use K8s. There’s little money in wiring together bash scripts.
Now, if you’re working on your side project alone, then sure HTMX is nice.
htmx has provided the greatest satisfaction and production of my 30-year programming career. After a year of constant development experience with it, I am confident that this is the proper method of building web applications. It truly is how HTML should have evolved.
How's it not a SPA, if you're updating the DOM in JS without a full page reload?
Sorry, I read a load of stuff about React, before I came to any explanation of HTMX. Turns out, it's loading fragments of HTML into the DOM (without reload), instead of loading fragments of JSON, converting them to HTML fragments client-side, and injecting the resulting HTML into the DOM (without reload).
So I stopped reading there; perhaps the author explained why HTMX solves this at the end (consistent with the general upside-down-ness), but the "is the future" title was also offputting, so excuse me if I should have read the whole article before commenting.
I never bought into the SPA thing. SPAs destroy the relationship between URLs and the World Wide Web.
I don't know. The tabs example on the htmx page is perceptibly slow to me. Making a rest call every time I switch a tab, each time sending 90% of the same html skeleton data over the wire feels like a sin to me. Returning html from my api also feels like a sin.
"You can use whatever programming language you like to deliver HTML, just like we used to."
Is this suggesting writing any language we want in the browser? I have wondered for a couple of decades why Python or some other open-source scripting language wasn't added to browsers. I know Microsoft supported VBScript as an alternative to JavaScript in Internet Explorer, and had it not been a security nightmare (remember the web page that would format your hard drive, anyone?) and a proprietary language, it might have been a rival to JavaScript in the browser. In those days it wouldn't have taken much to relegate JavaScript to non-use. Today we just get around it by compiling to WASM.
htmx got a lot of good press (deservedly) but I think somehow it needs to get to the next step beyond the basic hypermedia evangelism. I don't know exactly what that step needs to be, because I don't know what a fully "htmx-ed" web would look like. It is promising, but that promise must be made more concrete.
A conceptual roadmap of where this journey could take us and, ideally, some production-quality examples of solving important problems in a productive and fun way would increase the fan base and mindshare. Even better if it showed how to solve problems we didn't know we had :-). I mean, the last decade has been pretty boring in terms of opening new dimensions.
You know, nobody likes this argument, but desktop is still just better. Yeah, yeah, the updates, the security issues, I get it, but the tools are simply better: they render faster, have a better functionality/complexity ratio, and cause less gnashing of teeth.
recursivedoubts|2 years ago
hypermedia isn't ideal for everything[1], but it is an interesting & useful technology and libraries like htmx make it much more relevant for modern development
we have a free book on practical hypermedia (a review of concepts, old web 1.0 style apps, modernized htmx-based apps, and mobile hypermedia based on hyperview[2]) available here:
https://hypermedia.systems
[1] - https://htmx.org/essays/when-to-use-hypermedia/
[2] - https://hyperview.org/
redonkulus|2 years ago
We've moved back to an MPA structure with decorated markup to add interactivity like scroll views, fetching data, tabs and other common UX use cases. If you view the source on yahoo.com and look for "wafer," you can see some examples of how this works. It helps to avoid bundle size bloat from having to download and compile tons of JS for functionality to work.
For a more complex, data-driven site, I still think the SPA architecture or "islands" approach is ideal instead of MPA. For our largely static site, going full MPA with a simple client-side library based on HTML decorations has worked really well for us.
aidenn0|2 years ago
This is a necessity as long as latencies between the client and server are large enough to be perceptible to a human (i.e. almost always in a non-LAN environment).
[edit]
I also just noticed:
> ...these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
The part about "slow and unreliable internet connections" is not specific to SPAs. If anything, a thick client provides opportunities to improve the experience in locations with slow and unreliable internet connections.
[edit2]
> If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation.
This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
--
I like the idea of HTMX, but the first half of the article is a silly argument against SPAs. Was the author "cheating" in the second half by transpiling clojure to the JVM? Have they tested their TODO example on old hardware with an unreliable internet connection?
michaelchisari|2 years ago
A highly complex stock-trading application should absolutely not be using Htmx.
But a configuration page? A blog? Any basic app that doesn't require real-time updates? Htmx makes much more sense for those than React. And those simple needs are a much bigger part of the internet than the Hacker News crowd realizes or wants to admit.
If I could make one argument against SPAs, it's not that they don't have their uses (they obviously do); it's that we're using them for too much and too often. At some point we decided everything had to be an SPA, and it was only a matter of time before people sobered up and realized things had gone too far.
mtlynch|2 years ago
The thing that keeps holding me back from htmx is that it breaks Content Security Policy (CSP), which means you lose an effective protection against XSS.[0] When I last asked the maintainer about this, the response was that this was unlikely to ever change.[1]
Alpine.js, a similar project to htmx, claims to have a CSP-compatible version,[2] but it's not actually available in any official builds.
[0] https://htmx.org/docs/#security
[1] https://news.ycombinator.com/item?id=32158352
[2] https://alpinejs.dev/advanced/csp
[3] https://github.com/alpinejs/alpine/issues/237
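To make the CSP concern above concrete: htmx evaluates expressions embedded in attributes (for example the event filter in hx-trigger="keyup[ctrlKey]"), and that kind of dynamic evaluation is what a strict policy exists to forbid. An illustrative policy, shown as a meta tag:

```html
<!-- Same-origin scripts only, no eval. My understanding is that basic
     hx-get/hx-post swaps still work under this, but attribute-embedded
     expressions (event filters, hx-on) do not; treat that split as an
     assumption to verify against the htmx security docs. -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self'; object-src 'none'">
```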
dfabulich|2 years ago
This article makes its case about Htmx, but points out that its argument applies equally to Hotwired (formerly Turbolinks). Both Htmx and Hotwired/Turbolinks use custom HTML attributes with just a little bit of client-side JS to allow client-side requests to replace fragments of a page with HTML generated on the server side.
But Turbolinks is more than ten years old. React was born and rose to popularity during the age of Turbolinks. Turbolinks has already lost the war against React.
The biggest problem with Turbolinks/Htmx is that there's no good story for what happens when one component in a tree needs to update another component in the tree. (Especially if it's a "second cousin" component, where your parent component's parent component has subcomponents you want to update.)
EDIT: I know about multi-swap. https://htmx.org/extensions/multi-swap/ It's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client. If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.
Furthermore, in Turbolinks/Htmx, it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background. This means that the user always has to wait for a server round trip to create a TODO item, hurting the user experience. It's unacceptable on mobile web in particular.
When predicting the future, I always look to the State of JS survey https://2022.stateofjs.com/en-US/libraries/front-end-framewo... which asks participants which frameworks they've heard of, which ones they want to learn, which ones they're using, and, of the framework(s) they're using, whether they would use it again. This breaks down into Awareness, Usage, Interest, and Retention.
React is looking great on Usage, and still pretty good on Retention. Solid and Svelte are the upstarts, with low usage but very high interest and retention. Htmx doesn't even hit the charts.
The near future is React. The further future might be Svelte or Solid. The future is not Htmx.
vp8989|2 years ago
2) The missing piece is how you can achieve this "collapsing" back of functionality into single SSR deployable(s) while still preserving the ability to scale out a large web application across many teams. Microfrontends + microservices could be collapsed into SSR "microapplications" that are embedded into their hosting app using iframes?
rektide|2 years ago
I see a lot of resemblance to http://catalyst.rocks with WebComponents that target other components. I think there's something unspoken here that's really powerful & interesting, which is the declarativization of the UI. We have stuff on the page, but making the actions & linkages of what does what to what has so far been trapped in code-land, away from the DOM. The exciting possibility is that we can nicely encode more of the behavior into the DOM, which creates a consistent learnable/visible/malleable pattern for wiring (and rewiring) stuff up. It pushes what hypermedia can capture into a much deeper zone of behaviors than just anchor-tag links (and listeners, which are jump points away from the medium into codespace).
diegof79|2 years ago
In my opinion, the future of the web as a platform is about viewing the web browser as an operating system with basic composable primitives.
htmx adds attributes to HTML using JS, and the "no-JavaScript" argument is misleading: with htmx you can write interactions without JS, but htmx itself is JS. But, as it forces you to use HTML constructs that work without scripts (such as forms), the page will fall back; that doesn't mean the fallback is usable.
The custom htmx attributes work because the browser supports extending its behavior with JS. If we added those attributes to standard HTML, the result would be more fragmentation and an endless race. The best standard is one that eliminates the need for creating more high-level standards. In my view, a possible evolution of WASM could achieve that goal. It means going in the opposite direction of the article, as clients would do more computing work. In a future like that, you could use htmx, SwiftUI, Flutter, or React to develop web apps. The biggest challenge is balancing a powerful OS-like browser like that with qualities like searchability, accessibility, and learnability (the devtools inspector and console are the closest thing to Smalltalk we have today)...even desktop OSs struggle to provide that today.
majormajor|2 years ago
I've been on the sidelines for the better part of a decade for frontend stuff, but I was full-stack at a tiny startup in 2012ish that used Rails with partial fragments templates for this. It needed some more custom JS than having a "replacement target" annotation everywhere, but it was pretty straightforward, and provided shared rendering for the initial page load and these updates.
So, question to those who have been active in the frontend world since then: that obviously failed to win the market compared to JS-first/client-first approaches (Backbone was the alternative we were playing with back then). Has something shifted now that this is a significantly more appealing mode?
IIRC, one of the big downsides of that "partial" approach in comparison with SPA-approaches was that we had to still write those JSON-or-XML-returning versions of the endpoints as mobile clients became more prevalent. That seems like it would still be an issue here too.
0xbadcafebee|2 years ago
At this rate, when I'm 80 years old we will still be fucking around with these stupid lines of code, hunched over, ruining our eyesight, becoming ever more atrophied, all to make a fucking text box in a monitor pop some text into a screen on another monitor somewhere else in the world. It's absolutely absurd that we spend this much of our lives to do such a dumb thing, and we've been iterating on it for five decades, and it's still just popping some text in a screen, but we applaud ourselves that we're so advanced now because something you can't even see is doing something different in the background.
pkelly|2 years ago
A lot of the comments here seem to assume that there is a single best stack for building web applications. I believe this comes from the fact that, as web engineers, we have to choose which tech to invest our careers in, which is inherently risky. Spend a couple of years on something that becomes defunct and it feels like a waste. Also, startup recruiters are always looking for tech experience that matches their companies' choices. VCs want to strike while the iron is hot.
Something that doesn't get talked about enough (which the author does mention near the end of article) is that different web apps have different needs. There is 100% a need for SPAs for certain use cases. Messaging, video players, etc. But there are many cases where it is overkill, like the many many CRUD resource apps I've built over the years. Say you have a couple hundred users that need to manage the state of a dozen interconnected resources. The benefits of an MPA are great here. Routing is free, no duplication of FE / BE code. Small teams of devs can ship code and fix bugs very fast which keeps the user feedback loop tight.
MattyRad|2 years ago
> without the annoying full-page load refresh.
This fixation on the page refresh needs to stop. Nearly every single website which has purportedly "saved" page refreshes has brutalized every other aspect of the UX.
This is a good article, and I agree that Htmx brings sanity back to the frontend, but somewhere along the line frontend folks got it in their head that page refreshes were bad, which is incorrect for essentially all CRUD / REST APIs. Unless you're specifically making a complex application that happens to be served through the web, like Kibana or Metabase, then stop harping on page refreshes.
Even this article calls it the annoying refresh. Not the impediment refresh, or the derisive refresh, or the begrieved refresh. Moreover, what exactly is annoying about page refreshes? That there's a brief flash? That it takes ~0.3 seconds to completely resolve?
Users don't care about page refreshes, and in fact they are an indication of normalcy. Upending the entire stack and simultaneously breaking expected functionally to prevent them is madness.
The killer feature of Htmx is that it doesn't upend the entire stack, and you can optimize page refreshes relatively easily. That's great! But even then I'm still not convinced the tradeoff is worth it.
jmull|2 years ago
I'm not seeing it. SPAs can be overly complex and have other issues, but I'm not seeing HTMX as a particular improvement.
Also, a bunch of this article doesn't make sense to me.
E.g, one of the listed costs of SPAs is managing state on the client and server... but (1) you don't have to -- isn't it rather common to keep your app server stateless? -- and (2) HTMX certainly allows for client-side and server-side state, so I'm not sure how it's improving things. That is, if you want to carefully manage app state, you're going to need a mechanism to do that, and HTMX isn't going to help you.
It also doesn't somehow prevent a rat's nest of tooling or dependencies. It isn't an application framework, so this all depends on how you solve that.
SPAs also aren't inherently "very easy to make [...] incorrectly".
Also, the suggested HTMX approach to no browser-side JavaScript is very crappy. With such an approach, your app would have to be very specifically designed just to go from utterly horrible without JS to merely pretty horrible. There are far more straightforward ways to make apps that work well without JS. Also, this isn't exactly a mainstream requirement in my experience.
I could go on and on. "Caching": htmx doesn't address the hard parts of caching. "SEO-friendliness": like all the benefits here attributed to htmx, htmx doesn't particularly help with this, and there are many other available ways to achieve it.
IDK. These kinds of over-promising, hyped-up articles give me the feeling that the thing being hyped probably doesn't have a lot of real merit to be explored, or else they'd talk about that instead. It also feels dishonest to me, or at least incompetent, to make all of these claims and assertions that aren't really true or aren't really especially a benefit of htmx versus numerous other options.
Veuxdo|2 years ago
I mean, aren't these baseline "get computers to do stuff" things?
fogzen|2 years ago
HTMX is cool. HTMX may fit your needs. But it’s not enough for providing the best possible user experience.
chrsjxn|2 years ago
But I'll be honest. I'll believe it when I see it. It's not that htmx is bad, but given the complexity of client-side interactions on the modern web, I can't see it ever becoming really popular.
Some of the specifics in the comparisons are always weird, too.
> Instead of one universal client, scores of developers create bespoke clients, which have to understand the raw data they fetch from web servers and then render controls according to the data.
This is about client side apps fetching arbitrary JSON payloads, but your htmx backend needs to do the same work, right? You have to work with the raw data you get from your DB (or another service) and then render based on that data.
You're still coupled to the data, and your htmx endpoint is just as "bespoke" as the client code which uses it. It's not wrong to prefer that work be done on the server instead of the client, or vice versa, but we're really just shuffling complexity around.
iamsaitam|2 years ago
These kinds of takes fall in the bullseye of "I don't want to program in JavaScript". The subtext is all about this.
Perhaps.. maybe.. Htmx won't be the future because there are a lot of people that like programming in Javascript?
Pet_Ant|2 years ago
I've seen these architectures quickly ruined by 'can-do' people who butcher everything to get a feature done _and_ get a bonus from management for quick delivery.
tacone|2 years ago
I don't see the point, by the way. I think htmx is here to stay and a good choice for many, but it's clearly not a silver bullet. You make decently fast UIs, not blazing-fast ones; there are no (proper) offline-first apps with htmx; caching is likely more difficult or sometimes impossible; and the load on your server is inevitably greater (of course that could be more than acceptable in some cases, so why not?). That also means more bandwidth through your cloud provider as opposed to your CDN. You will still have to write JavaScript sooner or later.
It depends on what you're doing. Nothing is aprioristically "the future"; the future is "the future", and it has yet to come.
BeefySwain|2 years ago
If anyone is looking to discuss making Hypermedia Driven Applications with HTMX in Python, head over to the discussions there!
s1k3s|2 years ago
The future is whatever works best for your use-case.
lvh|2 years ago
So, I posit that the churn, while definitely real, is not actually intrinsic.
Right now, at Latacora, we're writing a bunch of Clojure. That includes Clerk notebooks, some of which incorporate React components. That's an advantage I think we shouldn't ignore: not needing to write my own, say, Gantt chart component, is a blessing. So, specifically: not only do I think the churn is incidental to the problem, I don't even believe you need to give up compatibility to get it.
Fun fact: despite all of this, a lot of what we're writing is in Clerk, and while that's still fundamentally an SPA-style combination of frontend and backend if you were to look at the implementation, it absolutely _feels_ like an htmx app does, in that it's a visualization of your backend first and foremost (React components notwithstanding).
jdthedisciple|2 years ago
Imagine tens of thousands of clients requesting millions of HTML fragments be put together by a single server maintaining all the state, while all the powerful high-end computing power at the end user's fingertips goes completely to waste.
Not convinced.
tabtab|2 years ago
Bloated JS frameworks like Angular, React, Vue, and Electron have big learning curves and a jillion gotcha's because they have to reinvent long-known and loved GUI idioms from scratch, but DOM is inherently defective for that need, meant for static documents. There are just too many GUI needs that HTML/DOM lacks or can't do right: https://www.reddit.com/r/CRUDology/comments/10ze9hu/missing_...
Let's byte the bullet and create a GUI markup standard. Perhaps base it off Tk or Qt kits to avoid starting from scratch.
peter_retief|2 years ago
The concept is great but why has it taken so long?
https://unpoly.com/
lucidguppy|2 years ago
Backend engineers are now able to write management tools and experimental products faster, and then pass the winning products off to a Flutter team to code for all environments. The backend could be converted into a Django REST API if the code is properly refactored.
incrudible|2 years ago
Moreover, REST APIs - and I mean the simple ones people actually want to use, none of that HATEOAS BS - are ubiquitous for all sorts of interactions between web and nonweb clients. Are you going to ship an MPA as your mobile apps, or are you going to just use REST plus whatever clients make sense?
It also makes a lot of sense in terms of organization. Your backend developers probably suck at design, your frontend developers suck at databases.
kurtextrem|2 years ago
However, I'm not sure if this is actually a problem or rather depends on how much interaction the user does (so where is the "turning point" of the overhead of having all in the bundle vs full HTML responses). What does everyone think?
aigoochamna|2 years ago
With that being said, I imagine it would become unmaintainable very quickly. The problems htmx is solving are better solved with other solutions in my opinion, but I do think there's something that can be learned or leveraged with the way htmx goes about the solution.
anyonecancode|2 years ago
Certainly it's possible to take on that burden and execute it well, but I think a lot of teams and businesses don't fully account for the fact that they are doing so and properly deciding if that extra burden is really necessary. The baseline for nailing performance and correctness is higher with an SPA.
yellowapple|2 years ago
hu3|2 years ago
AtlasBarfed|2 years ago
I think what is needed is to recognize that the SPA architecture isn't actually just a view processor. IMO it is a very shittily designed:
View rendered <--> client process <--> server process
So it seems that SPA apps load an absolute mountain of JavaScript into the view (the tab/page), which then starts (crudely, IMO) running as a client-side daemon, tracking messy state and interfacing with local storage, with JavaScript (opinion: yuck) ferreted away in a half dozen divs.
IMO, what has been needed ever since we got local storage and local session state and all that is ... a client daemon that the web page talks to for data services, and that daemon then calls out to the internet when it needs server data.
That way local state tracking, transformation, and maintenance can be isolated away from the code of the view. Large amounts of JavaScript (or maybe all of it, with some CSS wizardry) can be dropped. The "client daemon" could be coded in WebAssembly, so you aren't stuck with JavaScript (opinion: yuck).
You can even have more efficient many views/tabs interfacing with the single client daemon, and the client daemon can track and sync data between different tabs/views/windows.
Now, of course, that is fucking ripe as hell for abuse and tracking. Not sure how to solve it.
But "separation of concerns" in current web frameworks is a pipe dream.
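For what it's worth, browsers already ship a primitive that approximates this: a SharedWorker is a single script instance that every tab on the same origin can talk to over message ports. Below is a rough sketch of the idea, not a real library - the `DataDaemon` class and `getRecord` name are made up for illustration, and the fetch function is injected so the daemon stays independent of both the view and the network:

```javascript
// Sketch of a "client daemon": state and caching live here, outside the view.
// DataDaemon and getRecord are hypothetical names for illustration only.
class DataDaemon {
  constructor(fetchFn) {
    this.cache = new Map(); // shared by every tab connected to the daemon
    this.fetchFn = fetchFn; // injected, so the daemon is testable offline
  }

  // Return cached data if present; otherwise fetch once and cache it.
  async getRecord(key) {
    if (!this.cache.has(key)) {
      this.cache.set(key, await this.fetchFn(key));
    }
    return this.cache.get(key);
  }
}

// Browser wiring would live in the SharedWorker script, something like:
//
//   const daemon = new DataDaemon(key => fetch('/api/' + key).then(r => r.json()));
//   onconnect = e => {
//     const port = e.ports[0];
//     port.onmessage = async msg => port.postMessage(await daemon.getRecord(msg.data));
//   };
//
// and each tab would connect with: new SharedWorker('daemon.js')
```

Because every tab shares the one worker instance, the cross-tab sync described above falls out for free; the remaining hard problem is exactly the abuse/tracking concern.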
divan|2 years ago
Luckily, after 3 decades, there is some sobering realization that a typesetting engine is not a good foundation for modern apps. https://news.ycombinator.com/item?id=34612696
Web development without HTML/CSS/JS is the future.
mal-2|2 years ago
When I compare this to Phoenix LiveView I much prefer LiveView, because it both provides the markup templating engine and tracks the meaning of the relationship, with server-side tokens and methods.
nine_k|2 years ago
There's no "the future" in this area, because demands are very different; a heavily interactive SPA like GMail or Jira has requirements unlike an info page that needs a few bits of interactivity, etc.
branko_d|2 years ago
Yes, you could make a CORBA or DCOM object almost indistinguishable from a local object, except for the latency when it was actually remote. And since it looked like a normal object, it encouraged “chatty” interfaces, which exacerbated the latency cost.
Htmx seems pretty chatty to me, which I’m sure works OK over the LAN, but what about the “real” internet?
thomasreggi|2 years ago
auct|2 years ago
The js-ajax-button takes a similar approach: add a class to a button that has a data-url attribute, and it will make a request to that URL. It's a small function I use, but with uajax it's so powerful that I don't need React or htmx.
But it is hard to sell something that eliminates the use of JavaScript.
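The pattern described reads like a few lines of event delegation. Here is a hedged sketch - not the actual uajax code; `handleAjaxClick` and the class name are illustrative - with the fetch function passed in as a parameter so the logic is testable without a browser:

```javascript
// Sketch of the data-url button pattern described above (hypothetical names).
// Reads the element's data-url and issues a request via the injected fetchFn.
function handleAjaxClick(el, fetchFn) {
  const url = el.dataset && el.dataset.url;
  if (!url) return Promise.resolve(null); // ignore buttons without a data-url
  return fetchFn(url);
}

// Browser wiring via event delegation, so dynamically added buttons work too:
//
//   document.addEventListener('click', e => {
//     if (e.target.classList.contains('js-ajax-button')) {
//       handleAjaxClick(e.target, url => fetch(url));
//     }
//   });
```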
honkycat|2 years ago
Talk about the positives of YOUR approach, don't tear down a different approach that half the industry is using. You're not going to say anything new or interesting to the person you are trying to convince this way. Experienced engineers already know the trade-offs between an SPA and a server rendered experience.
w10-1|2 years ago
Ideas aside, the web app future belongs to those with the resources to sustain a response, or those who can restore the ability to capture/monetize developers and users in a closed system.
The scope of web apps is broad enough that many technologies arguably have their place. The open javascript ecosystem reduced the cost of creating candidates, but has no real mechanism to declare winners for the purpose of consolidating users, i.e., access to resources.
Careers and companies are built on navigating this complexity, but no one really has the incentive to reduce it unless they can capture that value.
I really appreciate Cloudflare because they are open about both their technology and their business model. They thus offer a reasonable guarantee that they can sustain their service and their technology, without basing that guarantee on the fact that they are a biggie like AWS, Microsoft, or Google (i.e., eating their own dog food, so we can join them at the trough).
The biggest cost in IT is not development or operating fees but reliance and opportunity.
jimmaswell|2 years ago
Relevant code:
https://github.com/ldyeax/jimm.horse/blob/master/j/j.php
https://github.com/ldyeax/jimm.horse/blob/master/j/component...
The JS would be a bit more elegant if script tags didn't need special handling to execute on insertion.
The experience is very seamless this way - I'm very pleased with it. It's live at https://jimm.horse - the dynamic behavior can be found clicking on the cooking icon or N64 logo.
On reading the article, I'll definitely make use of this if it becomes well-supported. It does exactly what I wanted here.
themaximalist|2 years ago
The LiveView/Hotwire/Livewire way of building applications makes a really great tradeoff: the ease of building websites with the speed and power of webapp UX.
I wanted something simple to use with Express and it's been very productive.
There's a few things to get used to, but overall like it and plan to keep using it in my projects.
PaulHoule|2 years ago
The build for the system took about 20 minutes, and part of the complexity was that every new task (a form where somebody had to make a judgement) had to be built twice, since both a front-end and a back-end component were required, so React was part of the problem and not part of the solution. Even in a production environment this split would have been a problem, because a busy system with many users might still need a new task added from time to time (think AMZN's MTurk), and forcing people to reload the front end to work on a new task defies the whole reason for using React.
It all was a formula for getting a 20 person team to be spinning its wheels, struggling to meet customer requirements and keeping our recruiters busy replacing developers that were getting burnt out.
I've built several generations of my own train-and-filter system since then and the latest one is HTMX powered. Each task is written once on the back end. My "build" process is click the green button on the IDE and the server boots in a second or two. I can add a new task and be collecting data in 5-10 minutes in some cases, contrasted to the "several people struggling for 5 days" that was common with the old system. There certainly are UIs that would be hard to implement with HTMX, but for me HTMX makes it possible to replace the buttons a user can choose from when they click a button (implement decision trees), make a button get "clicked" when a user presses a keyboard button and many other UI refinements.
I can take advantage of all the widgets available in HTML 5 and also add data visualizations based on d3.js. As for speed, I'd say contemporary web frameworks are very much "blub"
http://www.paulgraham.com/avg.html
On my tablet via tailscale with my server on the wrong end of an ADSL connection I just made a judgement and timed the page reload in less than a second with my stopwatch. On the LAN the responsiveness is basically immediate, like using a desktop application (if the desktop application wasn't always going out to lunch and showing a spinner all the time.)
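The button-replacement and keyboard tricks described above map directly onto stock htmx attributes. A sketch under made-up names (the /next-choices endpoint and #choices id are hypothetical): each click fetches the next set of buttons as an HTML fragment and swaps it in, and an hx-trigger filter lets a keypress anywhere on the page fire the same request:

```html
<!-- Server returns the next set of choice buttons as an HTML fragment -->
<div id="choices">
  <button hx-get="/next-choices?pick=a"
          hx-target="#choices"
          hx-swap="outerHTML">Choice A</button>
  <button hx-get="/next-choices?pick=b"
          hx-target="#choices"
          hx-swap="outerHTML"
          hx-trigger="click, keyup[key=='b'] from:body">Choice B</button>
</div>
```

Each response replaces the whole #choices div, which is exactly the "replace the buttons a user can choose from" decision-tree behavior, with no client-side state to manage.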
themodelplumber|2 years ago
It described the general steps and seemed to be able to describe how htmx works pretty well, including hx-get and hx-target, etc., but then said "As an AI language model, I am not able to write full applications with code".
I replied "do the same thing in bash" (which I knew would be different in significant ways, but just to check) and it provided the code.
I wonder, is this a function of the recency of htmx, or something else? Do other htmx developers encounter this? I imagine it's at least a bit of a pain for these boilerplate cases if it's consistent, compared with the GPT tooling available for other languages.
tkiolp4|2 years ago
Hell, the term “frontend developer” exists only because they are writing JS! Tell them it’s better to write HTML, and you are removing the “developer” from their titles!
Same reason why backend developers use K8s. There’s little money on wiring together bash scripts.
Now, if you’re working on your side project alone, then sure HTMX is nice.
silver-arrow|2 years ago
denton-scratch|2 years ago
Sorry, I read a load of stuff about React before I came to any explanation of HTMX. Turns out, it's loading fragments of HTML into the DOM (without a reload), instead of loading fragments of JSON, converting them to HTML fragments client-side, and injecting the resulting HTML into the DOM (without a reload).
So I stopped reading there; perhaps the author explained why HTMX solves this at the end (consistent with the general upside-down-ness), but the "is the future" title was also offputting, so excuse me if I should have read the whole article before commenting.
I never bought into the SPA thing. SPAs destroy the relationship between URLs and the World Wide Web.
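The distinction between the two data flows fits in a few lines. A minimal sketch (renderFromJson/renderFromHtml are hypothetical names; the container and fetch function are passed in to keep the comparison self-contained):

```javascript
// SPA style: fetch JSON, convert it to HTML client-side, inject it.
async function renderFromJson(container, fetchFn, toHtml) {
  const data = await fetchFn();       // e.g. GET /api/user -> {"name": "Ada"}
  container.innerHTML = toHtml(data); // templating happens in the browser
}

// htmx style: the server already rendered the fragment; just inject it.
async function renderFromHtml(container, fetchFn) {
  container.innerHTML = await fetchFn(); // e.g. GET /user -> "<p>Ada</p>"
}
```

Same network round trip either way; the difference is which side owns the templating, and therefore whether a second client-side codebase exists at all.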
kubota|2 years ago
mikece|2 years ago
Is this suggesting writing any language we want in the browser? I have wondered for a couple of decades why Python or some other open source scripting language wasn't added to browsers. I know Microsoft supported VBScript as an alternative to JavaScript in Internet Explorer, and had it not been a security nightmare (remember the web page that would format your hard drive, anyone?) and a proprietary language, it might have been a rival to JavaScript in the browser. In those days it wouldn't have taken much to relegate JavaScript to non-use. Today we just get around it by compiling to WASM.
nologic01|2 years ago
A conceptual roadmap of where this journey could take us and, ideally, some production-quality examples of solving important problems in a productive and fun way would increase the fan base and mindshare. Even better if it showed how to solve problems we didn't know we had :-). I mean, the last decade has been pretty boring in terms of opening new dimensions.
Just my two cents.
jksmith|2 years ago