The refrain against "we should go back to MPAs with server-rendered HTML" is often "well, what about Figma and Photoshop?", and of course, yes, those don't really work in the MPA, server-rendered HTML model.
The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple. The phrase becomes "well, what about Figma and Photoshop (and my mostly CRUD SaaS)?"
I think a valuable insight the MPA / minimal JS crowd is bringing to the table is that you shouldn't strive for cool and complicated tools, you should strive for the simplest tool possible, and even further, you should strive to make solutions that require the simplest tools possible whenever you can.
This is motte-and-bailey argumentation in my opinion.
The motte: SPAs are a good way to write highly complex applications in the browser, like Photoshop and Figma, to compete with desktop apps.
The bailey: SPAs are a good way to write most web applications.
If you attack the bailey, proponents retreat to the motte, which is hard to disagree with. With the motte successfully defended, proponents return to the bailey, beneficial for those enthusiastic about SPAs but much harder to defend.
The only way to tease this issue apart is to stick to specifics and avoid casting SPAs or MPAs as universally good or bad. Show me the use-case and we can decide which route is best.
That’s not why. In my experience, applications accumulate interactivity over time. At some point, they hit a threshold where you (as a developer, not an end user) wish you had gone with an interactive development model.
Also, for me, the statically typed, component-based approach to UI development that I get with Preact is my favorite way to build UIs. I’ve used Rails, PHP, ASP (the og), ASP.NET, ASP.NET MVC, along with old-school native windows development in VB6, C# Winforms, whatever garbage Microsoft came up with after Winforms (I forget what it was called), and probably other stacks I’m forgetting. VB6 and C# Winforms were the peak of my productivity. But for the web, the UI model of Preact is my favorite.
Agree, too many believe in the silver bullet that solves all their problems. Different problems require different solutions; it's kind of simple, but hard to realize when you're deep in the woods.
If you want to build a vector editor in the browser then yes, probably you want to leverage webassembly, canvas, webgl or whatever you fancy.
But if you're building a more basic application (like most CRUD SaaS actually are), then you probably don't want to over-complicate it; you instead want to be able to change and iterate quickly, so the simplest tools and solutions give you the most velocity for changes, until you've figured out the best way forward.
The trouble is recognizing where on the scale of "Figma <> news.ycombinator.com" you should be, and it's hard to identify exactly where the line gets drawn at which you can justify upfront technical innovation over tried-and-tested approaches.
From a brief look at my logs and history usage, and generally by my estimate, 95%, or dare I say 99%, of my traffic could be MPAs. Currently the only sites I visit regularly that are JS SPAs are Feedly, YouTube, Discourse forums, and Twitter. And apart from Twitter, the others could have been MPAs and still be perfectly fine. (Although YouTube is debatable.) I'd like to think 80-90% of the web population's browsing usage doesn't deviate from mine that much.
The thing about JS SPAs is that they are hard to get 100% right. Even the simplest thing. And this goes back to the topic of web development and computing. The modern web is designed by Google for Google. Making things easier for the other 98% of the web simply isn't their thing. And that is not just on the web, but in everything else they do as well. And since no one gets fired for using what Google uses, we end up with additional tools to solve the complexity of what Google uses.
Depending on how you count it, we are now fast coming up on 20 years of Google dominance on the web. And there hasn't been a single day I didn't wish for an alternative to compete with them. I know this sounds stupid. But maybe I should start another Yahoo.
> you should strive to make solutions that require the simplest tools possible whenever you can
I’ve gone back to making MPAs with minimal JS. It helps me actually ship my projects rather than tinkering with an over-complicated setup for mostly CRUD tasks.
In one project that is a bit more data-intensive and interactive I’m using Laravel Breeze / Laravel + Inertia.js (SSR React pages).
I’m also a big fan of Jekyll lately, I made my own theme on Thursday with only 2 tiny scripts for the mobile menu and submission of the contact form.
Using DOM APIs and managing a little bit of state is fine for many, many projects.
OTOH when you don’t control the requirements and the business asks for a ton of stateful widgets progressive enhancement can become a mess of spaghetti in the UI and API unless very carefully managed and well thought out. At that point you might as well go all in on React/Angular/Vue, especially when you have to account for a mix of skill levels and turnover.
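For the "DOM APIs and a little bit of state" approach, the state half can be a dozen lines of plain JavaScript. A minimal sketch (`createStore` is an invented name here, not any library's API):

```javascript
// A tiny observable store: plain functions, no framework.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    get: () => state,
    set(patch) {
      state = { ...state, ...patch };       // shallow merge, like setState
      listeners.forEach((fn) => fn(state)); // notify every subscriber
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);    // returns an unsubscribe function
    },
  };
}

// Browser wiring stays one subscription per binding, e.g.:
//   store.subscribe((s) => { countEl.textContent = s.count; });
```

That scale — one subscription per bound element — is roughly where skipping a framework stays pleasant.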
> The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites
It is not only Figma or Photoshop. Any site with multiple steps of interaction, or complex filters over search results, etc., benefits from an SPA and declarative code. The experience is smoother, and development of anything but simple forms is much faster.
People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.
> you should strive for the simplest tool possible, and even further, you should strive to make solutions that require the simplest tools possible whenever you can.
Why do you believe this? I couldn’t disagree more. People should strive for the most effective tool, and most of the time that’s what they already know, unless a new tool's efficacy outweighs the cost of learning it.
"the problem" "you should" – this is the language of special interests
Developers are salarymaxxing first, virtue signaling second to support their case in their employers' selection process, and work-minimization and pain-minimization third. Even the Simplicity Paladins are min/maxxing the same three priorities, perhaps weighing pain-minimization above salarymaxxing, yet still subject to the same invisible macro forces that shape our lives. And I postulate that this is a complete explanation of developer behavior at scale.
> The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple.
This is very similar to the NoSQL arc. Some people at prestigious places posted about some cool problems they had, and a generation of inexperienced developers decided that they needed MongoDB and Cassandra to build a CRUD app with several orders of magnitude fewer users, transactions, or developers. One of the biggest things our field needs to mature on is focusing on the problem our users have rather than what would look cool when applying for a new job.
The SPA obsession has been frustrating that way for me because I work with public-focused information-heavy sites where the benefits are usually negative and there’s a cost to users on older hardware – e.g. the median American user has JavaScript performance on par with an iPhone 6S so not requiring 4MB of JS to display text and pictures has real value – but that conflicts with hiring since every contractor is thinking about what’ll sound “modern” on their CV.
"Keep it simple, stupid!" is a design principle first noted by the U.S. Navy in 1960 [0]
... but some coders, including yours truly, have been brought up with that principle as a keystone of programming from day one (which was decades ago). It is related to the more modern DRY principle.
If this is being brought to the table now, it is only seemingly new; those at the table must have forgotten it, or never learned it. Of course, there are also commercial interests in keeping things as complicated as possible - it could just be that these have had too much influence for too long.
[0] https://en.wikipedia.org/wiki/KISS_principle
I think it's (increasingly) not as binary as either MPA or SPA. Although it has been for quite some time now.
A lot of web developers strive for some amount of templating and client-side interactivity on their websites. And when frameworks like React came up they solved interactivity issues but made it hard to integrate into existing server-side templating systems, which were mostly using different programming languages.
So because integrating the frameworks for client-side interactivity was hard, the frameworks also took on the job of templating the entire site and suddenly SPAs were popular. I think a big draw here was that the entire tooling became JavaScript.
But the drawbacks were apparent. I guess a big one was that search engines could not index these sites, and of course performance, so the frameworks got SSR support. The site was still written in the framework, rendered to HTML on the server, and then hydrated back into a SPA on the client.
Now, even more recently, we got stuff like islands, where you still use the handy web framework but can choose which parts of your site should actually be interactive (i.e. hydrated) on the client. And I believe this is the capability that has long been missing. Some sites require no JS on the client (they could even be SSGs), others require a little interactivity, and some make the most sense as full-blown SPAs.
We're finally entering the era where the developer has that choice even though they use the same underlying framework.
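The islands idea above can be sketched without reference to any particular framework: the server emits plain HTML, and interactive regions carry a marker attribute plus serialized props so a small client loader knows what to hydrate. Everything here (`renderIsland`, the `data-island` attribute) is invented for illustration:

```javascript
// Server side: wrap each interactive region in a marker element that
// carries its component name and props; everything else is inert HTML.
function renderIsland(name, props, innerHtml) {
  // Escape '<' so the JSON can't break out of the surrounding markup.
  const encoded = JSON.stringify(props).replace(/</g, '\\u003c');
  return `<div data-island="${name}" data-props='${encoded}'>${innerHtml}</div>`;
}

const page = `
<main>
  <article>Static article text, shipped with zero JS.</article>
  ${renderIsland('counter', { start: 5 }, '<button>5</button>')}
</main>`;

// Client side, a tiny loader would do roughly:
//   document.querySelectorAll('[data-island]').forEach((el) => {
//     const props = JSON.parse(el.dataset.props);
//     hydrate(components[el.dataset.island], props, el);
//   });
```

Only the marked regions pay the hydration cost; the article stays plain HTML.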
When you vote on a HN comment while writing a reply, it reloads and you lose your reply. That's the kind of problem you have with MPAs, even if you aren't building the next Figma.
"Simplest" feels like a folly. No project of significance stays in its simple phase. They all grow and expand.
Having a stable reliable generally applicable decision/toolset you can apply beats this hunt for optimization to smithereens. Don't optimize case by case. Optimize for your org, for your life, lean into good tools you can use universally and stop special casing your stuff. There's nearly no reason to complicate things by hounding for "simplicity." Other people won't be happier if you keep doing side quests for simple, and you won't be either.
(Do learn to get good with a front-end router, so you can avoid >50% of the practical downsides of SPAs. And I hope over time I can recommend WebComponents as a good-for-all un-framework.)
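For reference, the matching half of a front-end router is small enough to write by hand; only the history/DOM wiring is browser-specific. A sketch (`matchRoute` is a made-up helper, not a library API):

```javascript
// Match a path against a pattern like '/users/:id'.
// Returns the extracted params, or null if the route doesn't match.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = decodeURIComponent(pathParts[i]);
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment mismatch
    }
  }
  return params;
}

// In the browser you would wire this to the History API:
//   window.addEventListener('popstate', () =>
//     render(matchRoute('/users/:id', location.pathname)));
// and call history.pushState() on link clicks instead of navigating.
```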
The core of what differentiates applications isn't what happens on the front end. Putting all the focus on the client that gets delivered seems like a misappropriation of funds.
Especially for one-man teams, it just doesn't make sense. And on teams with multiple people, having relatively static HTML is a really effective abstraction.
I, for one, don't want them rendered in my browser. I have an OS that can run apps, and I want my browser to be an app that renders simple HTML pages. If you want an app, make a damn Desktop app that can run on my OS.
For stuff like Figma and Photoshop I can't help but suspect that the creators would be better off writing their program in C++ with the GUI toolkit of their choice, and compiling it for the web with Emscripten.
Last month a client asked me to build them a CRUD form using only old-school C# MVC with Razor templates. And by golly, it was much harder for me than just doing it in React + Next.
I had some nostalgic notion that MVC was going to be a smooth ride. That all this JS cruft was slowing me down. Then I needed a search bar, then validation, then I kept running into weird surprises with Razor templates. After a month I ruefully concluded I'd have gotten it done a heck of a lot faster just using my usual stack.
I now regret all those times I complained about how much bloat there is in the JS ecosystem. Is it possible to make CRUD apps with Razor? Sure, people do it every day. But for me, with a project with increasingly complex display logic and validation, I definitely should have tried a little harder to talk them into using Next and TS.
As an aside, I was surprised that getting HMR with C# is kind of a pain. I never figured out how to get it while debugging. So every time I wanted to debug some new issue with the template not sending data right to the controller (which felt like every few seconds of work) I'd have to restart the server and wait 10+ seconds for it to restart and then renavigate to where I was, reenter my form fields, and then try again. After I was done and wanted HMR, I'd have to restart the server again. That extra hassle really started to grind on my patience.
Everything running in a browser is interpreted. There is no reason for webdev to require a build step, and it largely does so because JS standards haven't delivered anything around static typing. Even a "this syntax is valid but ignored" rule would enable IDEs to provide checks via LSPs while keeping no-build running.
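Some of that "valid but ignored" typing already exists in comment form: JSDoc annotations are plain comments, so a file runs unmodified with no build step, while TypeScript's checker or an editor LSP can still verify it (adding `// @ts-check` at the top opts a file in). For example:

```javascript
// @ts-check
// No build step: this file runs as-is in Node or the browser,
// but the annotations below give the editor full type checking.

/**
 * @param {number} subtotal
 * @param {number} taxRate - e.g. 0.08 for 8%
 * @returns {number} total, rounded to cents
 */
function withTax(subtotal, taxRate) {
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

// The checker flags withTax('10', 0.08) as a type error,
// yet the runtime never sees any type syntax at all.
```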
Build steps and development iteration overhead are something to be avoided at all costs. For them to have been introduced, with multi-second latency, to web dev is a sign of developer experience dropping off a cliff.
Time-to-iterate, tool quality, release speed, etc. are essential to being able to build mental models of code (etc.)
Ease of deployment is a huge thing that gets lost. Over-focusing on a single area of an application makes it really obvious that the big picture was lost somewhere.
I appreciate the build step in setups like Vite/Vue because the development server can automatically and accurately hotpatch the application when I make changes. I don’t think you’d want to change the standards to couple DOM and js in the way that makes this possible in Vue but it’s an iteration speed improvement nonetheless.
A build step is a huge barrier that makes authoring your own websites require significantly more expertise than it otherwise would. It thus makes web development less accessible, and puts anything even a little bit complicated out of reach of anyone who isn't already an experienced web developer or exceptionally dedicated. It also discourages the slow growth of a pet website of someone who isn't primarily a web developer by trade into something more featureful, since expanding beyond the point where you can still reasonably avoid a build step suddenly requires acquiring a lot more expertise and expending a lot of effort all in one go, instead of being a smooth expansion into a more featureful project.
The simpler it is to make your own website without having extensive web development experience, the more people will be able to have their own website instead of being directed to the endless array of corporate silos like social media (which has largely replaced personal websites) and corporate middlemen (which have largely replaced in-house commercial websites for smaller actors) that take care of the burden of making your own website for you, with some rather obvious downsides.
I mean, a build step is not required to build a website, but I'd say anyone who wants to have more than one HTML page and one stylesheet will probably want some sort of build step sooner rather than later.
Like, if you have two or more pages, you probably want them to share the same header or footer, and unless you want to A) repeat the same markup on each page or B) inject them with a client-side script, you will need some sort of build step. There are more accessible solutions out there, like Hugo.
How else are you going to achieve that? Sure you could use PHP, but I don't see how that is more accessible or maintainable than having a build step.
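For scale, the build step being discussed can be a single small Node script rather than a toolchain: the core is just a function that wraps each page body in the shared header and footer. A sketch (the markup and file layout are invented):

```javascript
// Shared chrome, written once instead of repeated on every page.
const header = '<header><nav><a href="/">Home</a> <a href="/about.html">About</a></nav></header>';
const footer = '<footer>my site</footer>';

// Wrap one page body in the shared layout.
function buildPage(bodyHtml, title) {
  return `<!doctype html><html><head><title>${title}</title></head>` +
         `<body>${header}<main>${bodyHtml}</main>${footer}</body></html>`;
}

// A driver script would then just loop over source files, e.g.:
//   const fs = require('fs');
//   for (const name of fs.readdirSync('pages')) {
//     const body = fs.readFileSync('pages/' + name, 'utf8');
//     fs.writeFileSync('dist/' + name, buildPage(body, name));
//   }
```

Ten lines of Node is still a build step, but it is a far smaller barrier than a full bundler setup.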
Anecdote: coming from the application side, I had always thought of C as a low-level language, but at one company where I worked with chip designers who only did Verilog, I was gobsmacked when, in conversations with them, they said they didn't know higher-level languages like C and couldn't program in them.
“Riddled” has a tinge of negativity to it. I would say it’s actually a useful thing, “level” is a count of abstraction layers relative to the abstraction you’re familiar with. It’s more just a way to communicate some personal responsibility/knowledge range. I’ve heard people call Python “a low level way of using a computer” or similar.
> When I worked on animations, I was surprised at how many people believed that some animations “run on the GPU” (the browser can offload some animations to a separate process or thread that updates animations using the GPU to composite or even paint each frame but it doesn’t offload them to the GPU wholesale)
Not to nitpick his nitpick, but...I've said this exact thing in the past, and his parenthesized explanation is what I meant. It's too much of a mouthful to try and be super accurate and specific all the time.
Technology is not a religion and it doesn’t need prophets. Why are people so hell bent on convincing others to join them in their use of whatever technology?
Pick what solves your problem. In the context of web development, that overwhelmingly means using whatever is most popular given your preferred programming language.
> As an example, the Eleventy documentation seems to avoid using client-side JavaScript for the most part. As Eleventy supports various templating languages it provides code samples in each of the different languages. Unfortunately, it doesn’t record which language you’ve selected so if your chosen language is not the default one, you are forced to change tabs on every single code sample. A little client-side JavaScript here would make the experience so much more pleasant for users.
This may actually be an ePrivacy limitation (cookie law), not a desire to avoid JS. Persisting the setting across pages requires client-side storage, which in ePrivacy countries requires either that the persistence be obvious to the user, per-action consent (e.g. a "preferred language" setting), or site-wide user consent (a cookie pop-up).
My understanding of ePrivacy (mostly GDPR) is that this kind of feature does not require consent.
It's only features that would allow tracking of the user that require consent.
Storing some setting in a local storage, never sending it to the server is fine.
Things get a bit muddy when sending it to the server, but even then you may not need consent if it is a feature required for the correct working of the website, or for a better experience without tracking and profiling.
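The local-only case described above — remembering a chosen tab or language without anything leaving the browser — is a few lines of JavaScript. In this sketch the storage backend is passed in (in a browser you'd pass `window.localStorage`); `preference` and the key name are invented for illustration:

```javascript
// A tiny persisted-preference helper. The storage backend is injected,
// so the logic itself is plain JS and nothing is ever sent to a server.
function preference(key, fallback, storage) {
  return {
    get() {
      const value = storage.getItem(key);
      return value === null ? fallback : value; // fall back on first visit
    },
    set(value) {
      storage.setItem(key, String(value));
    },
  };
}

// Browser usage (element names are illustrative):
//   const lang = preference('code-lang', 'njk', window.localStorage);
//   tabBar.addEventListener('click', (e) => lang.set(e.target.dataset.lang));
//   showTabsFor(lang.get()); // on every page load
```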
Recently I was reading the Learn CSS the Pedantic Way book, and the definition for inline boxes did not match the way anonymous block boxes were generated when an inline-level element had a block-level element as its child. So I went looking elsewhere for a more appropriate definition for that case and found this issue on the standards: https://github.com/w3c/csswg-drafts/issues/1477 It was really interesting to learn that I was not the only one confused.

My question was: does the inline box generated by the inline-level element contain the box generated by the block-level child, or is there no inline box that is a parent of them all, but instead two sibling inline-level boxes of the block-level box, each wrapped in an anonymous block box?

Reading that issue I got to know the concept of fragments, which I did not know browsers had. But the issue seems to suggest that the box tree for this case should have the inline box as a parent of the block box. Which led me to another question: in that case, if I apply a border to the parent inline-level element, shouldn't it apply to the overall box that is generated? (It does not.) The answer is that borders between block boxes and inline-level boxes should not intersect, but that is really difficult to derive from reading the standards alone.

Anyway, it was headache-inducing trying to learn the box model pedantically :)
I wish I could learn more about layout in browsers. I am trying to read the code of LayoutNG in Chromium, but I need more aspirin, hehehe.
I'd like to add another one: I don't need a separate NodeJS (or whatever engine) service to build my service dashboard. Before NodeJS got popular, backend engineers like me simply put web assets into a folder in the web app, so the service would have an admin page or a dashboard for per-node administration. For some reason, that practice has become a taboo. My engineers have been insisting on setting up a separate NodeJS service just to build even a simple admin page. But I fail to see why.

The reasons I get are usually these three: 1) a NodeJS service gives us optimized performance, through techniques like server-side rendering; 2) a separate service is easier to scale; 3) a separate service offers separation of concerns.

However, 1) and 2) are premature optimization to me. All I need is a standardized per-node admin page for my service. The QPS is probably one per day, by a human. Why would I care about SSR or scalability at all? And 3) is quite hand-wavy. On the other hand, the overhead of managing a separate service, as well as the dependencies brought in by the NodeJS ecosystem, seems high.
So, what's wrong with the old way of having embedded web assets in a service for building simple admin pages?
Reminds me of the fact that for a long time C++ compiler engineers didn't themselves write code in C++ or know best practices despite writing the implementation.
> Many didn’t know about new CSS features that had shipped 10 years ago. What’s more, even when we told them about them, they didn’t seem too excited. They were doing just fine with jQuery and WordPress, thank you.
...and if jQuery and WordPress do the job, they are sound technical decisions.
What is not a sound technical decision is forever chasing the latest fashion in technology. Also known as the "oh look a shiny thing development paradigm".
> There are Twitter/X polls, for example, but they tend to only be answered by the Web developers on the bleeding edge and are easily skewed by who spreads the word about the poll.
Maybe MDN should have a comment section, like the PHP docs. That would be more representative.
What does it say about this whole domain when, as the author says, web apps and sites/blogs (also add in mobile apps) are so very different from each other, using a myriad of technologies, each with its own learning curve? Where is the uniformity and commonality in all this? Why are developers even perpetuating this?
That said, this might be a good place to ask for recommendations for study, since I am not a "Web Developer":
1) Comprehensive books/other sources on full-stack Web App and Site development. Bonus points if they use a single language for frontend/backend/everything else.
Businesses want uniformity and commonality since, in theory, it should lower development costs (see projects like Flutter and Fuchsia, which have the goal of making every platform web-based).
The problem is that users/customers have higher expectations for their user experiences than the web can offer on mobile/desktop/etc.
Robinhood, Duolingo, Slack are a few good examples of UX being huge differentiators.
While I agree with "web engine developers and web spec developers have little-to-no idea about web development", I disagree with "web browsers are good at handling complex and long-lived DOM trees with dynamic changes now".
They are certainly better than before at handling the memory and bookkeeping of a large DOM tree. Every browser had so many unexplainable little bugs, but nowadays they can be relied on to correctly handle their own internal data without crashing. It’s a huge improvement.
> "web browsers are good at handling complex and long-lived DOM trees with dynamic changes now"
Is there an alternative renderer, or anything else, that handles "complex and long-lived $something-trees with dynamic changes" better than web engines do?
They've been optimized for exactly that for decades at this point, with huge investments in both human-hours and money. It's hard to imagine something else handling it better than browser engines.
I spend a lot of time with DNS (it just happened, man) and I see the same general thing where a lot of intention and expertise is imputed to others. True systems thinking around large systems is rare. Buffer Bloat (and the lack of progress in remediation, and the lawfare and near literal acts of congress due to misattribution and misunderstanding of the problem) as this thing slouches along and utters its phlegmatic growl could be emblematic.
Absolutely fantastic writing: clear, kind, authoritative. I don’t agree with it all, but I think the last section sums it all up nicely, and it's something I’ve felt for a while.
With new SSR frameworks like Next.js, I think this whole MPA/SPA dichotomy starts to dissolve a little bit. I’m thrilled that browser standards are evolving to help it along!
It would be interesting to measure battery savings made by disabling JavaScript on mobile devices. While it might be cheaper to plop website on GitHub Pages or Netlify and others, somehow I feel that all costs are just handed down to the user in bandwidth and battery use.
[+] [-] ryanbrunner|2 years ago|reply
The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple. The phrase becomes "well, what about Figma and Photoshop (and my mostly CRUD SaaS)?"
I think a valuable insight that the MPA / minimal JS crowd is bringing to the table is the idea is that you shouldn't strive for cool and complicated tools, you should strive for the simplest tool possible, and even further, you should strive to make solutions that require the simplest tools possible whenever you can.
[+] [-] jayceedenton|2 years ago|reply
The motte: SPAs are a good way to write highly complex applications in the browser, like Photoshop and Figma, to compete with desktop apps.
The bailey: SPAs are a good way to write most web applications.
If you attack the bailey, proponents retreat to the motte, which is hard to disagree with. With the motte successfully defended, proponents return to the bailey, beneficial for those enthusiastic about SPAs but much harder to defend.
The only way to tease this issue apart is to stick to specifics and avoid casting SPAs or MPAs as universally good or bad. Show me the use-case and we can decide which route is best.
[+] [-] christophilus|2 years ago|reply
Also, for me, the statically typed, component-based approach to UI development that I get with Preact is my favorite way to build UIs. I’ve used Rails, PHP, ASP (the og), ASP.NET, ASP.NET MVC, along with old-school native windows development in VB6, C# Winforms, whatever garbage Microsoft came up with after Winforms (I forget what it was called), and probably other stacks I’m forgetting. VB6 and C# Winforms were the peak of my productivity. But for the web, the UI model of Preact is my favorite.
[+] [-] diggan|2 years ago|reply
If you want to build a vector editor in the browser then yes, probably you want to leverage webassembly, canvas, webgl or whatever you fancy.
But if you're building a more basic application (like most CRUD SaaS actually are) then probably you don't want to over-complicate it, as you instead want to be able to change and iterate quickly, so simplest tools and solutions gives you the most velocity for changes, until you've figured out the best way forward.
Trouble is to recognize where on the scale of "Figma <> news.ycombinator.com" you should be, and it's hard to identify exactly where the line gets drawn where you can justify upfront technical innovation in favor of true-and-tested approaches.
[+] [-] ksec|2 years ago|reply
The thing about JS SPA is that they are hard to make it 100% right. Even the simplest thing. And this goes back to the topic about Web Development and computing. Modern day web is designed by Google for Google. Making things easier for 98% of the web simply isn't their thing. And that is not just on the Web but everything else they do as well. And since no one gets fired for using what Google uses, we then end up with additional tools to solve the complexity of what Google uses.
Depending on how you count it we are now fast coming close to 20 years of Google dominance on the web. And there hasn't been a single day I didn't wish for an alternative to compete with them. I know this sounds stupid. But may be I should start another Yahoo.
[+] [-] kgdiem|2 years ago|reply
I’ve gone back to making MPA apps with minimal JS. It helps me actually ship my projects rather than tinkering and having an over complicated setup for mostly CRUD tasks.
In one project that is a bit more data intensive and interactive I’m using Laravel Breeze / Laravel + inertajs (SSR react pages).
I’m also a big fan of Jekyll lately, I made my own theme on Thursday with only 2 tiny scripts for the mobile menu and submission of the contact form.
Using DOM APIs and managing a little bit of state is fine for many, many projects.
OTOH when you don’t control the requirements and the business asks for a ton of stateful widgets progressive enhancement can become a mess of spaghetti in the UI and API unless very carefully managed and well thought out. At that point you might as well go all in on React/Angular/Vue, especially when you have to account for a mix of skill levels and turnover.
[+] [-] blackoil|2 years ago|reply
It is not only Figma or Photoshop. Any site with multiple steps of interactions or complex filters over search result etc. benefit from SPA and declarative code. The experience is smoother and development of anything, but simple forms is much faster.
People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.
[+] [-] endisneigh|2 years ago|reply
Why do you believe this? I couldn’t disagree more. People should strive for the most effective tool, and most of the time that’s what they already know, unless some new tools’ efficacy outweighs cost to learn it
[+] [-] dustingetz|2 years ago|reply
developers are salarymaxxing first, second virtue signaling to support their case in their employers' selection process, third work-minimization and pain-minimization. Even the Simplicity Paladins are min/maxxing the same three priorities, perhaps weighing pain-minimization above salarymaxxing, yet still subject to the same invisible macro forces that shape our lives. and I postulate that this is a complete explanation of developer behavior at scale.
[+] [-] acdha|2 years ago|reply
This is very similar to the NoSQL arc. Some people at prestigious places posted about some cool problems they had, and a generation of inexperienced developers started that they needed MongoDB and Cassandra to build a CRUD app with several orders of magnitude fewer users, transactions, or developers. One of the biggest things our field needs to mature on is the idea of focusing on the problem our users have rather than what would look cool when applying for a new job.
The SPA obsession has been frustrating that way for me because I work with public-focused information-heavy sites where the benefits are usually negative and there’s a cost to users on older hardware – e.g. the median American user has JavaScript performance on par with an iPhone 6S so not requiring 4MB of JS to display text and pictures has real value – but that conflicts with hiring since every contractor is thinking about what’ll sound “modern” on their CV.
[+] [-] yetanother12345|2 years ago|reply
Wikipedia states that
... but some coders, including yours truly, has been brought up with that principle as a keystone of programming from day one (which was decades ago). It is related to the more modern DRY principle.If this is brought to the table now it is only seemingly so, caused by the fact that those at the table must have forgot it, or never learned it. Of course, there are also commercial interests in keeping things as complicated as possible - it could just be that these have had too much influence for too long.
[0] https://en.wikipedia.org/wiki/KISS_principle
[+] [-] Yaina|2 years ago|reply
A lot of web developers strive for some amount of templating and client-side interactivity on their websites. And when frameworks like React came up they solved interactivity issues but made it hard to integrate into existing server-side templating systems, which were mostly using different programming languages.
So because integrating the frameworks for client-side interactivity was hard, the frameworks also took on the job of templating the entire site and suddenly SPAs were popular. I think a big draw here was that the entire tooling became JavaScript.
But the drawbacks were apparent, the big ones being that search engines could not index these sites and, of course, performance. So the frameworks got SSR support: the site was still written in the framework, rendered to HTML on the server, and then hydrated back into a SPA on the client.
Now, even more recently we got stuff like islands, where you still use the handy web framework but can choose which parts of your site should actually be interactive (i.e. hydrated) on the client. And I believe this is the capability that has just long been missing. Some sites require no JS on the client (could even be SSGs), others require a little interactivity, and some make most sense as full blown SPAs.
We're finally entering the era where the developer has that choice even though they use the same underlying framework.
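A toy sketch of the islands idea described above, assuming no particular framework (all names here are made up for illustration): the server renders every component to static HTML, but only components flagged as islands get hydrated on the client.

```javascript
// Hypothetical component registry: only Counter needs client-side JS.
const components = {
  Header:  { render: () => "<header>My Site</header>", island: false },
  Counter: { render: () => "<button>0</button>",       island: true  },
  Footer:  { render: () => "<footer>(c) 2023</footer>", island: false },
};

// "Server": render every component to static HTML, islands included.
function renderPage(names) {
  return names.map((n) => components[n].render()).join("\n");
}

// "Client": hydrate only the components flagged as islands;
// the rest stay as inert, zero-JS markup.
function hydrate(names) {
  return names.filter((n) => components[n].island);
}

const page = ["Header", "Counter", "Footer"];
console.log(renderPage(page)); // static HTML for all three components
console.log(hydrate(page));    // only ["Counter"] ships hydration code
```

The point of the sketch is the asymmetry: the whole page is server-rendered, but the JS payload scales with the number of islands, not the size of the page.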
[+] [-] amadeuspagel|2 years ago|reply
[+] [-] jauntywundrkind|2 years ago|reply
Having a stable, reliable, generally applicable decision/toolset you can apply beats this hunt for optimization to smithereens. Don't optimize case by case. Optimize for your org, for your life: lean into good tools you can use universally and stop special-casing your stuff. There's nearly no reason to complicate things by hounding for "simplicity." Other people won't be happier if you keep doing side quests for simple, and you won't be either.
(Do learn to get good with a front-end router, so you can avoid >50% of the practical downsides of SPAs. And I hope over time I can recommend WebComponents as a good-for-all un-framework.)
[+] [-] toasted-subs|2 years ago|reply
Especially on one-man teams it just doesn't make sense. And on teams with multiple people, having relatively static HTML is a really effective abstraction.
[+] [-] palata|2 years ago|reply
I, for one, don't want them rendered in my browser. I have an OS that can run apps, and I want my browser to be an app that renders simple HTML pages. If you want an app, make a damn Desktop app that can run on my OS.
[+] [-] traverseda|2 years ago|reply
[+] [-] JackMorgan|2 years ago|reply
I had some nostalgic notion that MVC was going to be a smooth ride. That all this JS cruft was slowing me down. Then I needed a search bar, then validation, then I kept running into weird surprises with Razor templates. After a month I ruefully concluded I'd have gotten it done a heck of a lot faster just using my usual stack.
I now regret all those times I complained about how much bloat there is in the JS ecosystem. Yes, it is possible to make CRUD apps with Razor; sure, people do it every day. But for me, with a project with increasingly complex display logic and validation, I definitely should have tried a little harder to talk them into using Next and TS.
As an aside, I was surprised that getting HMR with C# is kind of a pain. I never figured out how to get it while debugging. So every time I wanted to debug some new issue with the template not sending data right to the controller (which felt like every few seconds of work) I'd have to restart the server and wait 10+ seconds for it to restart and then renavigate to where I was, reenter my form fields, and then try again. After I was done and wanted HMR, I'd have to restart the server again. That extra hassle really started to grind on my patience.
[+] [-] mjburgess|2 years ago|reply
Everything running in a browser is interpreted. There is no reason for webdev to require a build step, and it largely does so because JS standards haven't delivered anything around static typing. Even a "this syntax is valid but ignored" rule would let IDEs provide checks via LSPs while the code still runs with no build step.
Build steps and development iteration overhead are something to be avoided at all costs. For them to have been introduced, with multi-second latency, to web dev is a sign of developer experience dropping off a cliff.
Time-to-iterate, tool quality, release speed, etc. are essential to being able to build mental models of code.
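One way to get something close to "checked in the IDE, ignored at runtime" today, with no build step at all: plain .js files with JSDoc type annotations, which TypeScript's language server can check in the editor while the browser treats them as ordinary comments. A minimal sketch:

```javascript
// @ts-check
// With @ts-check on, editors running the TypeScript language server
// will type-check this plain .js file; the browser just sees comments.

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

// add("1", 2); // the editor would flag this; the runtime wouldn't care
console.log(add(1, 2)); // 3
```

This doesn't give you everything a real type system does, but it is an existing example of "valid but ignored" typing syntax working without a build step.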
[+] [-] croes|2 years ago|reply
That's exactly the reason for a build step.
In a build step, the code can be optimized to reduce the amount of code that needs to be interpreted and/or make it run faster.
[+] [-] toasted-subs|2 years ago|reply
[+] [-] pgorczak|2 years ago|reply
[+] [-] Waterluvian|2 years ago|reply
[+] [-] eugenekolo|2 years ago|reply
[+] [-] mostlylurks|2 years ago|reply
A build step is a huge barrier that makes authoring your own websites require significantly more expertise than it otherwise would. It thus makes web development less accessible, and puts anything even a little bit complicated out of reach of anyone who isn't already an experienced web developer or exceptionally dedicated. It also discourages the slow development of some pet website of a not-primarily-a-web-developer-by-trade into something more featureful, since expanding beyond the point where you can still reasonably avoid a build step suddenly requires acquiring a lot more expertise and expending a lot of effort all in one go, instead of just being a smooth expansion into a more featureful project.
The simpler it is to make your own website without having extensive web development experience, the more people will be able to have their own website instead of being directed to the endless array of corporate silos like social media (which has largely replaced personal websites) and corporate middlemen (which have largely replaced in-house commercial websites for smaller actors) that take care of the burden of making your own website for you, with some rather obvious downsides.
[+] [-] Yaina|2 years ago|reply
Like, if you have two or more pages, you probably want them to share the same header or footer, and unless you want to A) repeat the same markup on each page or B) inject them with a client-side script, you will need some sort of build step. There are more accessible solutions out there, like Hugo.
How else are you going to achieve that? Sure you could use PHP, but I don't see how that is more accessible or maintainable than having a build step.
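For concreteness, the simplest possible version of such a build step can be a few lines of script (file names and content here are hypothetical): stamp a shared header and footer into each page body, emitting plain static HTML.

```javascript
// A hypothetical minimal "build step": inline a shared header and
// footer into each page, so the output is plain static HTML.
const header = "<header><a href=\"/\">My Site</a></header>";
const footer = "<footer>Made by hand</footer>";

function buildPage(title, body) {
  return [
    "<!doctype html>",
    `<title>${title}</title>`,
    header,
    body,
    footer,
  ].join("\n");
}

// In a real script you'd read page bodies from disk and write the
// results out with fs; kept in memory here for brevity.
const about = buildPage("About", "<main>About us</main>");
console.log(about);
```

Whether this counts as "more accessible than PHP" is exactly the judgment call being debated here; the sketch just shows how little machinery the shared-header problem actually requires.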
[+] [-] inopinatus|2 years ago|reply
”In a high-level language like C...” - chip designer
”In a low-level language like C...” - application programmer
[+] [-] rramadass|2 years ago|reply
[+] [-] graypegg|2 years ago|reply
[+] [-] danielvaughn|2 years ago|reply
Not to nitpick his nitpick, but...I've said this exact thing in the past, and his parenthesized explanation is what I meant. It's too much of a mouthful to try and be super accurate and specific all the time.
[+] [-] endisneigh|2 years ago|reply
Pick what solves your problem. In the context of web development that overwhelmingly means using whatever is most popular given your preferred programming language.
[+] [-] croisillon|2 years ago|reply
[+] [-] jefftk|2 years ago|reply
This may actually be an ePrivacy limitation (cookie law), not a desire to avoid JS. Persisting the setting across pages requires client-side storage, which in ePrivacy countries requires either that it be obvious to the user that the setting is persisted, per-action consent (ex: a "preferred language" setting), or site-wide user consent (a cookie pop-up).
[+] [-] gbuk2013|2 years ago|reply
[+] [-] Gustek|2 years ago|reply
Things get a bit muddy when sending data to a server, but even then you may not need consent if the feature is required for the correct working of the website, or improves the experience without tracking and profiling.
[+] [-] bacro|2 years ago|reply
[+] [-] g9yuayon|2 years ago|reply
So, what's wrong with the old way of having embedded web assets in a service for building simple admin pages?
[+] [-] mgaunard|2 years ago|reply
[+] [-] cabalamat|2 years ago|reply
...and if jQuery and WordPress do the job, they are sound technical decisions.
What is not a sound technical decision is forever chasing the latest fashion in technology, also known as the "oh look, a shiny thing" development paradigm.
[+] [-] amadeuspagel|2 years ago|reply
Maybe MDN should have a comment section, like the PHP docs. That would be more representative.
[+] [-] rramadass|2 years ago|reply
What does it say about this whole domain when, as the author says, Web Apps and Sites/Blogs (also add in Mobile apps) are so very different from each other using a myriad set of technologies each of which has a learning curve? Where is the uniformity and commonality in all this? Why are developers even perpetuating this?
That said, this might be a good place to ask for recommendations for study since i am not a "Web Developer";
1) Comprehensive books/other sources on full-stack Web App and Site development. Bonus points if they use a single language for frontend/backend/everything else.
2) The same as above but using C/C++ languages.
[+] [-] calderwoodra|2 years ago|reply
The problem is that users/customers have higher expectations for their user experiences than the web can offer on mobile/desktop/etc.
Robinhood, Duolingo, Slack are a few good examples of UX being huge differentiators.
[+] [-] troupo|2 years ago|reply
[+] [-] hyperhello|2 years ago|reply
[+] [-] diggan|2 years ago|reply
Is there an alternative renderer, or something else, that handles "complex and long-lived $something-trees with dynamic changes" better than web engines do?
They've been optimized for just that for decades at this point, with huge investments in both human-hours and money. It's hard to imagine something else that can handle it better than browser engines.
[+] [-] m3047|2 years ago|reply
[+] [-] mouzogu|2 years ago|reply
then you will spend 5 hours replacing or updating deprecated and newly incompatible npm packages and fixing system issues.
[+] [-] spcebar|2 years ago|reply
[+] [-] bbor|2 years ago|reply
With new SSR frameworks like Next.js, I think this whole MPA/SPA dichotomy starts to dissolve a little bit. I’m thrilled that browser standards are evolving to help it along!
[+] [-] butz|2 years ago|reply