Most people here are criticizing the author for doing some dumb things. I concur, but still think they have a good point.
First, keep in mind that the author's use case is a content-heavy app with sprinkles of interactivity. This is very important because it sets the "webpage speed" goalpost to a concrete place: they want good lighthouse/first load times and good SEO.
I've fallen into the same pit before. I've used Create React App with a custom server-renderer, Gatsby and Next in different projects. None of the solutions is truly satisfactory for the author's use case for a single very strong reason: React's hydration process is both blocking and slow. I hope that sooner rather than later React is able to offer a good solution for incremental hydration, but it seems quite far for now.
Once you realize this, the only way to keep using React is to step out of the mainstream and play with multiple render roots, parts of the page that never get hydrated and so on. It is possible to do things here, but it is definitely a rocky path.
Of course, there are many wrong things the author explains that you can avoid, but I'll throw a bone to them here too. Most "wrong things to do" they explain are both wrong and understandable. And they openly accept it.
For instance, one wrong thing to do that I've had to fight against a lot is JS-based device-specific rendering. It is so much easier to implement a "mobile ? <MobileScreen /> : <DesktopScreen />" than to make a single screen that adapts properly using CSS that it's not even funny. Unfortunately, it also breaks SSR, leads to janky page loads, and hurts performance.
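A minimal sketch of why that pattern breaks SSR (the breakpoint and names here are illustrative, not from any particular codebase): on the server there is no `window`, so a JS device check always takes one branch, and a mobile client then hydrates markup that doesn't match.

```javascript
// Anti-pattern sketch: device detection in JS. On the server there is no
// `window`, so SSR always takes the desktop branch; a mobile client then
// hydrates different markup and React must patch up the mismatch.
function isMobileViewport() {
  return typeof window !== "undefined" &&
    window.matchMedia("(max-width: 768px)").matches;
}

// What server-side rendering sees (no window): always the desktop branch.
const choice = isMobileViewport() ? "MobileScreen" : "DesktopScreen";
```

The CSS equivalent (`@media (max-width: 768px) { ... }`) ships the same markup everywhere and lets the browser adapt it, which is what keeps the SSR output stable.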
I fully agree that as of today and for content-heavy sites React pushes you towards a pit of despair instead of a pit of success. You can make it work, but ... is it worth it?
The initial motivation for intercooler.js (which the author forked) was performance. I was working on a large bulk table update and building the table dynamically in javascript. The performance was terrible (this was back in 2012, no idea what it would be like today).
I realized that I could just deliver and slam HTML into the DOM and that the browser engine, written in C, was very fast at rendering it.
That turned into a pretty big javascript function, which then turned into intercooler, which then turned into htmx: https://htmx.org
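The core trick can be sketched in a few lines (an illustration of the idea, not intercooler's actual code): fetch server-rendered HTML and hand it to the browser's parser, instead of building nodes one by one in JS.

```javascript
// Sketch: let the server render the rows and the browser's native parser
// build the DOM, instead of constructing nodes one by one in JS.
// `target` is anything with an innerHTML property (a DOM element in the
// browser; a plain object stands in here for illustration).
function swapFragment(target, html) {
  target.innerHTML = html;
  return target;
}

// In the browser this pairs with fetch():
//   fetch("/rows").then(r => r.text()).then(html => swapFragment(tbody, html));
const tbody = swapFragment({ innerHTML: "" }, "<tr><td>updated</td></tr>");
```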
This is the typical "we didn't spend any time thinking about our architecture therefore we're going to blame our framework" article.
React is a great choice for certain use-cases, but when low-quality developers are allowed to pick it up and apply it to everything you end up in a mess. The same thing happens with literally any tool.
If you want speedy initial interaction times, manageable codebases, and requirement X, use the right tools for the job and instil a better, more thoughtful development culture.
In general, these posts sound a bit like this: I tried jQuery, and after a while it all became a mess; took up a Backbone project, and was good for a while, but eventually it became too complex; then I worked on Angular and that seemed a big improvement, but then... and finally with React my architectures are clean.
While the reality is more like this: I had 6 months of programming experience and used jQuery and made a disaster; with 1.5 years of experience I used Backbone and I fared better; with 3 years of experience I tried Angular and I was able to build a decent size application but ultimately shot myself in the foot; and now that I have 6 years of experience my software quality has improved a lot, it must be react!
React has been a godsend for us as we slowly modernise an application that is a mess of server-rendered HTML and hacked-together frontend code.
Slowly we are rewriting individual pieces as embedded React components (no SPA here) and moving to a proper API layer that the components talk to. The separation of concerns has made it a lot easier to increase code coverage and ensure a controlled rollout of new features.
We also just bit the bullet and paid for syncfusion to use on our frontend to avoid reinventing the wheel for a lot of the functionality we need.
> we’ve discovered that React also leads to some questionable practices. Like hovers in JS (rather than in CSS), drop-down menus in JS, not rendering hidden (under a hover) text (Google won’t be happy), weird complex logic (since it’s possible!), etc.
This is some strange reasoning that I have yet to see myself. If you can do the hover states/drop-down menus in CSS, why not do them in CSS, even if you're using React? Seems to be blaming something on a library that the library has no control over in the first place (which, to be frank, seems relatively common in web dev circles).
> In the worst case, we would serve you 2.5MB of minified (non-gzipped) JS
And holy guacamoly, how do you end up with this?! Seems that something was surely wrong in the compilation options, forgetting to mangle names or something, missing dead-tree elimination maybe?
Talking of bundle sizes, this isn't inherent to the React point being made here, but it's arguably too easy to blow up what you're serving. A common example I like to reference is Moment[1] — its unpacked size is 4MB, most of which is different locales. If you _don't_ want to bundle all of those with your project, you have to do extra work, instead of locales being opt-in. A perhaps less common but more striking example is Ag-grid[2], a widely-used table component that is 25MB before being packed down. Instead of being served as a modular set of components, it's mostly a big blob with minimal separation of concerns.
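Moment's all-locales default is a well-known instance of this; the commonly documented workaround — assuming a webpack build — is to strip the locale directory from the bundle:

```javascript
// webpack.config.js (excerpt) — drop moment's ./locale directory from the
// bundle; any locale you actually need can then be imported explicitly.
const webpack = require("webpack");

module.exports = {
  plugins: [
    new webpack.IgnorePlugin({
      resourceRegExp: /^\.\/locale$/,
      contextRegExp: /moment$/,
    }),
  ],
};
```

With this in place, `import "moment/locale/de"` (for example) opts a single locale back in — the inverse of the default, which is exactly the point being made above.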
The takeaway here is twofold: we need better tooling as well as better practices for how to use those tools. It's both an educational problem as well as a toolset issue. It is possible to build lean and fast pages, but it's currently considerably harder than building gargantuan monsters. I don't see the web bloat problem going away until the dynamic here is flipped — it should be easy to ship small bundles even with minimal experience.
[1] https://www.npmjs.com/package/moment
[2] https://www.npmjs.com/package/ag-grid-community
That first quote (using JS when CSS would be faster and simpler) does remind me of being a beginner and always arriving at jQuery's animate() for all my needs.
That said, animation is hard, and React doesn't insulate you from that, so it takes some thought. In fact, being a layer of abstraction, it requires even more understanding of animation. Fading a component in is easy with regular CSS transitions. But fading a component out generically means that you have to keep the component mounted for the transition.
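The fade-out constraint can be modeled without the DOM (a hypothetical helper, not a React API): unmounting has to be deferred until the transition reports completion, which is exactly the bookkeeping React makes you do yourself.

```javascript
// The component may only be removed after the exit transition finishes;
// removing it on `beginExit` would skip the animation entirely.
function makeExitController() {
  let state = "mounted";
  return {
    beginExit() {            // e.g. add the .fade-out CSS class here
      if (state === "mounted") state = "exiting";
    },
    transitionEnd() {        // e.g. the transitionend event handler
      if (state === "exiting") state = "unmounted";
    },
    get state() { return state; },
  };
}
```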
Perhaps anyone who has used react-bootstrap's Transition system (animate={true} on modals and tooltips) has run into those quirks where you start seeing a pattern of animate={false} fixing all sorts of random bugs.
With React, if you're grabby with 3rd-party libraries, you can end up with a Russian doll of HoCs, each one from a different library, that are so generic that they kill performance, and it's not even obvious why without being a profiler expert. Whereas without React, you wouldn't have found those solutions at all, so you'd roll your own cross-cutting solution.
That isn't necessarily a problem with React, but something that you have to resist with frameworks in general. It reminds me of Ruby on Rails, googling "rails avatars" and ending up with two libraries like carrierwave + "has_avatar" when you could have just built your own simple solution. It's a form of technical debt. After all, it's hard to justify the effort of deabstraction when you're assigned to 100 other issues.
> Seems to be blaming something on a library that the library has no control over in the first place
Libraries and frameworks establish idioms, which encourage or discourage certain patterns. In my experience React / JSX definitely encourage complexity and abstraction by making display and logic so intertwined, especially with hooks.
> And holy guacamoly, how do you end up with this?
Libraries upon libraries, one tiny problem at a time. Unless you're in a very small team, or have very strict policies for adding new dependencies and vetting their impact on bundle size, this will inevitably happen.
Yeah, those are choices. You still have to understand what you're actually doing in the browser and the DOM, same as jquery or vanilla js.
I can see how easy it would be to accidentally get a bonkers js bundle if you start using external dependencies without monitoring how much code they ship.
Does React have a perf cost? Absolutely. But there's something else going on here.
"Wait," we told ourselves. "Surely, we can do some tree-shaking, replace momentjs, be strategic in our imports and shave a ton of that off"
With all of that we got it down to 1/2 of the size, but that was still too big for our needs.
The practice of removing unneeded code from a bundle using static analysis is commonly known as “tree-shaking”, but I like your version better. :D
> This is some strange reasoning that I have yet to see myself. If you can do the hover states/drop-down menus in CSS, why not do them in CSS, even if you're using React?
Because when requirements change and your hover animation needs some additional flair in certain cases, you'll be glad that you had automated unit tests to verify the original behavior still works along with your customizations.
> This is some strange reasoning that I have yet to see myself. If you can do the hover states/drop-down menus in CSS, why not do them in CSS, even if you're using React? Seems to be blaming something on a library that the library has no control over in the first place (which, to be frank, seems relatively common in web dev circles).
Let's all be honest, that's complete laziness or lack of knowledge by the developer.
> And holy guacamoly, how do you end up with this?! Seems that something was surely wrong in the compilation options, forgetting to mangle names or something, missing dead-tree elimination maybe?
Maybe sourcemapping?
I have been with React for 4 years now. Link state can be driven through React if you are testing some state or the URL (active URL). But hover should never be React-driven. I cannot fathom why anyone would do that.
The framework, the libraries, and the information you find on the internet definitely encourage you to go for animations via JS. Might be a holdover from before CSS3, but I'm not sure I've ever seen people really try to use CSS3 for their animations unless they were working on their own toys. Just google "animated X react" and 80-90% of the articles are massive JS components with a little styling in the CSS.
For size of the bundle, just remember the node_modules black hole meme. The amount and size of JS libraries is no joke; it's wildly out of control. 2.5MB minified non-gzipped is common mostly because of a paradigm of "once you gzip it it will be small, inflating it doesn't cost anything, and this way everything is preloaded!". Libraries come with a bunch of images embedded as Base64-encoded strings, full localization tables, 40 1kb dependencies (left-pad and friends), etc. These are all the defaults, and no one changes defaults.
To make a reasonable web application today without all the insanity, you have to be very disciplined, because everything is pointing you toward doing dumb or crazy things.
When I first read the haiku at the bottom of the HTMX homepage, I had a flash of insight.
javascript fatigue:
longing for a hypertext
already in hand
What are we doing? State management libraries? Hypermedia is the engine of application state. Send data to client as HTML and have the client apply styles over it, rather than converting from JSON to local objects and storing in some sort of reactive data store. You will find that semantically structured HTML is almost as economical as JSON.
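The size claim is easy to sanity-check with a toy comparison (made-up data, purely illustrative):

```javascript
// The same two records as a JSON payload and as renderable hypermedia.
const json = JSON.stringify([
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
]);
const html = '<ul><li data-id="1">Ada</li><li data-id="2">Grace</li></ul>';
// Comparable size, but the browser can display the HTML directly:
// no client-side store and no JSON-to-DOM rendering step needed.
```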
We need to look at it from two sides: whether it's good for developers and whether it's good for users. React was great at the former and terrible at the latter.
React is not "bad for users". Developers build complex, fast apps with React all the time. It can be fast, but if you make mistakes with it then it's easy to make something very, very slow.
The app I work on is huge. It's ~8MB of uncompressed React + Redux in development (much smaller in production though), and pages include a lot of assets so they can weigh in at 23MB in the worst cases. A complex page can have up to 60,000 DOM nodes under React's control with thousands of event listeners. It starts in about 3s and never drops below 60fps.
> A complex page can have up to 60,000 DOM nodes under React's control with thousands of event listeners. It starts in about 3s and never drops below 60fps
Can one really make such assertions about client-rendered web apps? I'm assuming these numbers aren't solely measured on localhost on some state-of-the-art development machine, so won't they depend on the client's machine specs, browser, usage, bandwidth, etc.?
This is admittedly nit-picky, but you're not managing 60k DOM nodes if you're using a virtual list, that's the whole point of virtualization. You might have 60k items in your list but you're only ever rendering a tiny subset of those items based on what is visible (with some overlap).
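The windowing math behind that is small (a generic sketch; real libraries like react-window add measurement and caching on top):

```javascript
// Given scroll position and a fixed row height, compute the slice of a
// 60k-item list that actually needs DOM nodes, plus a little overscan.
function visibleRange(scrollTop, viewportHeight, rowHeight, total, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(total, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { first, last };
}
```

At any scroll position this yields a few dozen rendered rows, not 60,000 — which is why a virtual list can stay at 60fps while the full dataset never touches the DOM.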
> we’ve discovered that React also leads to some questionable practices. Like hovers in JS (rather than in CSS), drop-down menus in JS, not rendering hidden (under a hover) text (Google won’t be happy), weird complex logic (since it’s possible!), etc. You can have a React app without those problems, but apparently, you have to have better self-control than we had (nobody’s perfect!).
idk if this is fair. OP was using some pretty niche tools (Clojure), whereas best-practice React metaframeworks like Next.js may have addressed some of those pagespeed issues.
additionally, I highly doubt that the author has replicated this functionality by using his new turbolinky framework.
So the post is better titled "I improved webpage speed by throwing away React AND a bunch of UI requirements". which is fair dinkum, but less exciting.
As others have commented, this doesn't seem like it's React's fault.
But it does illustrate how React isn't a magical solution to the front-end woes and how complicated the whole thing still is to do right.
I'd still use React for projects, but for now I've been incredibly happy with the LiveView solution that Phoenix/Elixir offers (or the variants for other frameworks: Blazor for C#, Livewire for Laravel/PHP).
It's surprising how often I'll work on something and realize that the solution is quite simple now, where before it would definitely mean some serious thinking.
For example: libraries. I remember so many projects where I needed to do some date formatting or manipulation. Moment.js was the obvious solution, but including the whole library was not an option.
With LiveView I can just pull in whatever dependency I want, because it's all server-side. It's only the markup diff for the specific component that gets sent down the wire.
Or security. I need to show a user, but only a 'friend' can see the email address (or other profile details). The 'old-fashioned' way would involve separate API calls and a certain nervousness that perhaps I might end up in a situation where the front-end behaves how I expect, but the API calls somehow expose non-friend data.
With LiveView I can just add a conditional statement to the view, and since the resulting HTML is all that goes to the client, I'm done!
Of course this only works with a persistent and relatively low-latency connection, but I can't remember the last time I worked on a project where this wasn't an implicit assumption.
I'm perfectly happy using React/Next/Vue when necessary, but it's a really strange experience to read these kinds of articles and threads these days when for so much of my day to day these problems just went away.
How on earth does React encourage the bad practice of “hovers in JS”? React absolutely has zero influence on this - that one is entirely on the developers.
I’m also exceptionally confused why the Pagespeed score was sitting at just 5/100 - it doesn’t sound like the dev team actually understood the result and attempted to resolve it.
It sounds like the author has found some new technology they're happy with for now, but it also sounds like all the previous problems were self-inflicted, and they're bound to repeat them again.
While different tech and frameworks have an influence on the problems you'll face, you've still got to do good programming.
Reading all of the cynicism in the comments, I wonder if we are taking into account the business context for the development of the app. I'm not a fan of React even though I use it daily, but its permissive nature allows me to build/modify features pretty quickly, albeit at low quality. I'm as idealistic as the next person about good architecture, but if the code you are writing is for a new product that is still trying to find market fit, spending a lot of time on the architecture of something that could drastically change doesn't make much sense to me.
All that I'm saying is that, as opposed to some more opinionated frameworks, React lets you do pretty much whatever you want, which is great for development speed but can also be exhausting given the huge amount of decisions that need to be made.
Not opposed to server rendering or backend heavy sites, but a lot of people are able to get much better Lighthouse scores for fairly complex sites through a combination of webpack code splitting, service-worker based caching, and CDNs.
So, the problems here may have had more to do with the clojurescript stack (which I am not much familiar with), or author's lack of familiarity with javascript optimization strategies than react, SPA model or client side rendering.
For sites that are content-heavy I've started wondering if it makes sense to have the content server-side rendered the old-fashioned way, and use multiple React roots for the parts of the page that need to be interactive. You can still use Redux etc. for managing app state globally (though not React context), and you get most of the gains of load time, and of using React where you need it.
It seems everyone uses <body><div id="app-root"></div></body> but React is perfectly happy being scattered around the page.
It's just a thought, I haven't tried it myself yet - has anyone else?
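For what it's worth, the mechanics are simple. A sketch with stand-in names (`render` would be ReactDOM.render and `widgets` your component map; neither is a real API here):

```javascript
// Mount a small app into every placeholder element found on a
// server-rendered page, instead of one big #app-root.
function mountWidgets(elements, widgets, render) {
  let mounted = 0;
  for (const el of elements) {
    const Widget = widgets[el.widget];
    if (Widget) {
      render(Widget, el); // ReactDOM.render(<Widget />, el) in practice
      mounted++;
    }
  }
  return mounted;
}
```

In the browser, `elements` would come from something like `document.querySelectorAll("[data-widget]")`, so the server-rendered content around the roots never needs hydrating.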
We do exactly this within SharePoint. We have various react parts that look things up in the database - sometimes quite heavy queries for reporting.
We have put together a small collection of fairly generic React apps, which means we can put one on a page, put in some config, and have it displaying a report generated in SQL in about as long as it takes to write the SQL for the report. We have used this approach for generic SQL reports, SharePoint lists, image retrieval and entry forms. Having the library of options to hand has meant we can confidently assemble a new (fairly complex) page much quicker than previously - and be confident that it will work, as it is reusing known-good components.
Depends on your business case / use case and where you want to draw lines. A lot rides on which parts you perceive as being "application" and which parts aren't.
For instance, you could build an e-commerce shop as an SPA, or you could target specific parts that process user interaction dynamically / fluidly - such as the checkout process, adding to a cart,... - and consider those as separate applications.
There are also trade offs. Search and navigation, for instance. You could build an SPA for an entire search engine. But then you may end up doing heavy lifting, like dynamically managing URL state through routing components, which is something browsers already do themselves: the only gain being that you don't reload the entire page.
So, the big question boils down to: what are you really trying to solve? A UI/UX problem? A performance problem? A maintenance problem? Something else? And who are the stakeholders, who's using the stuff you're gonna build? What are their intentions and motives?
That's when you come to a conclusion: there's no silver bullet. The architectural design you choose needs to be an informed choice above anything else. And it should be informed by your specific context rather than the affordances provided by the tools at your disposal.
The hard part is sitting down and taking a bit of time up front to think and articulate an argument that, given your context, validates choosing a particular strategy. (Personally, I tend to sit back and stare at the ceiling with my notebook and a pencil, but that's just me.)
At my old company we did this approach for some of our pages. We were transitioning away from our old codebase page by page but rewriting the codebase took longer.
New features were still desired so we built/repurposed react/redux components into what we called “hybrid” pages. It worked great for us!
This map is actually a WordPress plugin that loads locations from Google Sheets into a JS var that a React bundle loads. The client already had the rest of the site built in WordPress, so it was easier to just make a plugin with a shortcode that loads the React bundle into a div.
Yes, a hybrid. Routing and initial rendering handled by a server-side MVC framework, like Laravel. And on each page you can have one or more micro-SPAs with a lightweight JS framework like Mithril or Preact. You can even send the initial payload as JSON in a JS global variable inside a script tag.
I've been doing Ajax practically since Day 1, and in the olden days when I was young and stupid I actually sent DIV IDs back to the server, which the server would then reply with -- in XML -- to tell the client which DIV to replace.
Then I realized Javascript could do lexical closures which meant the client could keep track of the targets, which also meant the server no longer had to care about things servers shouldn't care about.
The next realization was that DIV IDs are global variables and thus in most cases a bad idea, so now my event handlers automatically search upward (and sometimes sibling-wise) from 'this' for DIVs matching classes or other selectors, to keep the scope local.
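In the browser this is what `Element.closest()` gives you; the idea fits in a few lines. A DOM-free walker for illustration (plain objects with `classes`/`parent` stand in for DOM nodes):

```javascript
// Walk parent links upward from the event target until a node carrying
// the wanted class is found -- local scope instead of global DIV IDs.
function closestByClass(node, cls) {
  while (node) {
    if (node.classes && node.classes.includes(cls)) return node;
    node = node.parent;
  }
  return null;
}
```

In real code, `event.target.closest(".panel")` does the same walk natively, so the handler never needs a globally unique ID.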
This TwinSpark library seems like the next iteration of all the good ideas, plus even more flexibility and proper separation of concerns, without JQuery or other dependencies.
Just use Svelte. It's a super simple API, the "trickiest" part is the "reactive sections" but the whole API and reactivity part you can grok in an afternoon.
In the past I have used Angular and React - these days I just always reach for Svelte (speed and simplicity).
I was happy learning Svelte; it felt like a breath of fresh air with how simple it was in comparison to the other, more popular frameworks. But then, when I started to kick the tires of Sapper, I quickly realized that that's where all the complexity was shunted to.
I would also recommend that anyone read the Solid author's blog posts about different approaches to client-side rendering and the technology behind Solid. It is a really good read regardless of whether you choose to use the library.
> It supports only modern browsers (not IE or Opera Mini) but drops that 88kb monster.
Whenever I read something like this, I usually take it that the author is being serious. However, how long does it actually take to load an 88kb library these days? Is it really 'monstrous'? If this was the author's top performance drain based on profiling, I commend their technical abilities.
Well, the reason why I started writing my own library is because I couldn't get all the functionality I needed from Intercooler. Why does it not depend on jQuery? Because I wanted no dependencies. :-)
And a "monster" is sarcasm referring to that React bundle I described before.
Like some people said, 5/100 on the PageSpeed score is pretty bad. My site used to have this. It boiled down to two things:
1) I wasn't code splitting by page, and had a couple heavy dependencies (which made things worse)
2) I wasn't pre-rendering my Create React App site via react-snap. During the CI build, react-snap runs Puppeteer, visits each page on the site, and generates ready-to-go HTML versions of each page.
Those two changes took me from a 7/100 to a 95+/100 in short order. Makes logical sense too. Nowadays the site hovers at 85/100, but I don't have the time right now to reinvestigate it.
With the options at the time, I'm happy with my tech choices. If I had to start with React again today, I would do Next.js with a static build output. It would save me a bunch of time scaffolding things.
Maybe sourcemapping?
[+] [-] brainless|5 years ago|reply
[+] [-] Olreich|5 years ago|reply
For the size of the bundle, just remember the node_modules black-hole meme. The amount and size of JS libraries is no joke; it's wildly out of control. 2.5MB minified, non-gzipped is common, mostly because of a paradigm of "once you gzip it, it will be small, inflating it doesn't cost anything, and this way everything is preloaded!" Libraries come with a bunch of images embedded as Base64-encoded strings, the full localization tables, forty 1kb dependencies (left-pad and friends), etc. These are all the defaults, and no one changes defaults.
To make a reasonable web application today without all the insanity, you have to be very disciplined, because everything is pointing you toward doing dumb or crazy things.
[+] [-] hliyan|5 years ago|reply
javascript fatigue:
longing for a hypertext
already in hand
What are we doing? State management libraries? Hypermedia is the engine of application state. Send data to the client as HTML and have the client apply styles over it, rather than converting JSON to local objects and storing them in some sort of reactive data store. You will find that semantically structured HTML is almost as economical as JSON.
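A rough, made-up illustration of that last claim: the same list of users serialized as JSON versus as semantic HTML. The HTML version is directly renderable and styleable, yet comparable in size (the payloads below are invented for the comparison):

```javascript
// The same data, once as a JSON payload and once as semantic HTML.
const asJson = JSON.stringify([
  { name: "Ada", role: "admin" },
  { name: "Lin", role: "member" },
]);
const asHtml =
  '<ul><li class="admin">Ada</li><li class="member">Lin</li></ul>';

// The HTML carries the same information, plus it is already the view.
console.log(asJson.length, asHtml.length);
```

The JSON still needs client-side code to turn it into DOM; the HTML is done the moment it arrives.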
[+] [-] onion2k|5 years ago|reply
React is not "bad for users". Developers build complex, fast apps with React all the time. It can be fast, but if you make mistakes with it then it's easy to make something very, very slow.
The app I work on is huge. It's ~8MB of uncompressed React + Redux in development (much smaller in production though), and pages include a lot of assets so they can weigh in at 23MB in the worst cases. A complex page can have up to 60,000 DOM nodes under React's control with thousands of event listeners. It starts in about 3s and never drops below 60fps.
[+] [-] lgl|5 years ago|reply
Can one really make such claims about client-rendered web apps? I'm assuming these numbers aren't solely measured on localhost on some state-of-the-art development machine, so won't they depend on the client's machine specs, browser, usage, bandwidth, etc.?
[+] [-] Davertron|5 years ago|reply
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] voldacar|5 years ago|reply
[deleted]
[+] [-] astura|5 years ago|reply
[+] [-] swyx|5 years ago|reply
idk if this is fair. OP was using some pretty niche tools (Clojure), whereas best-practice React metaframeworks like Next.js may have addressed some of those pagespeed issues.
additionally, I highly doubt that the author has replicated this functionality by using his new turbolinky framework.
So the post is better titled "I improved webpage speed by throwing away React AND a bunch of UI requirements", which is fair dinkum, but less exciting.
[+] [-] mercer|5 years ago|reply
But it does illustrate how React isn't a magical solution to the front-end woes and how complicated the whole thing still is to do right.
I'd still use React for projects, but for now I've been incredibly happy with the LiveView solution that Phoenix/Elixir offers (or the variants for other frameworks. Blazor for C#, LightWire for Laravel/PHP?).
It's surprising how often I'll work on something and realize that the solution is quite simple now, where before it would definitely mean some serious thinking.
For example: libraries. I remember so many projects where I needed to do some date formatting or manipulation. Moment.js was the obvious solution, but including the whole library was not an option.
With LiveView I can just pull in whatever dependency I want, because it's all server-side. It's only the markup diff for the specific component that gets sent down the wire.
Or security. I need to show a user, but only a 'friend' can see the email address (or other profile details). The 'old-fashioned' way would involve separate API calls and a certain nervousness that perhaps I might end up in a situation where the front-end behaves how I expect, but the API calls somehow expose non-friend data.
With LiveView I can just add a conditional statement to the view, and since the resulting HTML is all that goes to the client, I'm done!
Of course this only works with a persistent and relatively low-latency connection, but I can't remember the last time I worked on a project where this wasn't an implicit assumption.
I'm perfectly happy using React/Next/Vue when necessary, but it's a really strange experience to read these kinds of articles and threads these days when for so much of my day to day these problems just went away.
[+] [-] madeofpalk|5 years ago|reply
I’m also exceptionally confused why the Pagespeed score was sitting at just 5/100 - it doesn’t sound like the dev team actually understood the result and attempted to resolve it.
It sounds like the author has found some new technology they're happy with for now, but it sounds like all the previous problems were self-inflicted and they're bound to repeat them again.
While different tech and frameworks have an influence on the problems you'll face, you've still got to do good programming.
[+] [-] _benj|5 years ago|reply
All I'm saying is that, as opposed to some other opinionated frameworks, React lets you do pretty much whatever you want, which is great for development speed but can also be exhausting given the huge number of decisions that need to be made.
[+] [-] lf-non|5 years ago|reply
So the problems here may have had more to do with the ClojureScript stack (which I am not very familiar with), or the author's lack of familiarity with JavaScript optimization strategies, than with React, the SPA model, or client-side rendering.
[+] [-] llimos|5 years ago|reply
It seems everyone uses <body><div id="app-root"></div></body> but React is perfectly happy being scattered around the page.
It's just a thought, I haven't tried it myself yet - has anyone else?
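A minimal sketch of what multiple roots could look like (the element IDs and the `SearchBox`/`CommentForm` components are illustrative, assumed to come from your bundle; `ReactDOM.render` would be the pre-React-18 equivalent of `createRoot`):

```html
<body>
  <article><!-- server-rendered content, never touched by React --></article>

  <div id="search-box"></div>
  <article><!-- more static content --></article>
  <div id="comment-form"></div>

  <script>
    // One root per interactive island; everything between them
    // stays plain HTML with no hydration cost.
    ReactDOM.createRoot(document.getElementById('search-box'))
      .render(React.createElement(SearchBox));
    ReactDOM.createRoot(document.getElementById('comment-form'))
      .render(React.createElement(CommentForm));
  </script>
</body>
```

This is essentially the "islands" pattern: React only pays its cost where interactivity actually lives.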
[+] [-] catbuttes|5 years ago|reply
We have put together a small collection of fairly generic React apps, which means we can put one on a page, add some config, and have it displaying a report generated in SQL in about as long as it takes to write the SQL for the report. We have used this approach for generic SQL reports, SharePoint lists, image retrieval, and entry forms. Having the library of options to hand has meant we can confidently assemble a new (fairly complex) page much more quickly than before, and be confident that it will work, since it reuses known-good components.
[+] [-] CaptArmchair|5 years ago|reply
For instance, you could build an e-commerce shop as an SPA, or you could target specific parts that process user interaction dynamically / fluidly - such as the checkout process, adding to a cart,... - and consider those as separate applications.
There are also trade-offs. Search and navigation, for instance. You could build an SPA for an entire search engine. But then you may end up doing heavy lifting, like dynamically managing URL state through routing components, which is something browsers already do themselves: the only gain being that you don't reload the entire page.
So, the big question boils down to: what are you really trying to solve? A UI/UX problem? A performance problem? A maintenance problem? Something else? And who are the stakeholders, who's using the stuff you're gonna build? What are their intentions and motives?
That's when you come to a conclusion: there's no silver bullet. The architectural design you choose needs to be an informed choice above anything else. And it should be informed by your specific context rather than the affordances provided by the tools at your disposal.
The hard part is sitting down and taking a bit of time up front to think and articulate an argument that, given your context, validates choosing a particular strategy. (Personally, I tend to sit back and stare at the ceiling with my notebook and a pencil, but that's just me.)
[+] [-] sovietmudkipz|5 years ago|reply
New features were still desired so we built/repurposed react/redux components into what we called “hybrid” pages. It worked great for us!
[+] [-] ncrmro|5 years ago|reply
Some others and I built this while I was at Poetic in Houston, TX: https://www.houstonfoodbank.org/find-help/agency-locator/
[+] [-] dsego|5 years ago|reply
[+] [-] renke1|5 years ago|reply
[+] [-] dreamcompiler|5 years ago|reply
Then I realized Javascript could do lexical closures which meant the client could keep track of the targets, which also meant the server no longer had to care about things servers shouldn't care about.
The next realization was that DIV IDs are global variables and thus in most cases a bad idea, so now my event handlers automatically search upward (and sometimes sibling-wise) from 'this' for DIVs matching classes or other selectors, to keep the scope local.
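In the browser this upward search is essentially what `Element.closest()` does. The sketch below uses a hypothetical plain-object node tree standing in for the DOM, just to show the scoping idea:

```javascript
// Walk from the event's source element upward until a node carrying
// the wanted class is found -- scope stays local, no global IDs needed.
function closestByClass(node, cls) {
  while (node) {
    if ((node.classes || []).includes(cls)) return node;
    node = node.parent;
  }
  return null;
}

// A stand-in "DOM": a button inside a row inside a table.
const table = { classes: ["data-table"], parent: null };
const row = { classes: ["row"], parent: table };
const button = { classes: ["delete-btn"], parent: row };

console.log(closestByClass(button, "row") === row);          // true
console.log(closestByClass(button, "data-table") === table); // true
console.log(closestByClass(button, "missing"));              // null
```

In a real handler you would simply call `event.target.closest('.row')`, which performs the same walk over actual elements.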
This TwinSpark library seems like the next iteration of all the good ideas, plus even more flexibility and proper separation of concerns, without JQuery or other dependencies.
Bravo!
[+] [-] rawoke083600|5 years ago|reply
In the past I have used Angular and React - these days I just always reach for Svelte (speed and simplicity).
[+] [-] skc|5 years ago|reply
[+] [-] leeman2016|5 years ago|reply
Smaller bundles, faster, easier to use ... everything one can ask for
[+] [-] jevgeni|5 years ago|reply
This: https://github.com/ryansolid/solid
[+] [-] griffiths|5 years ago|reply
I would also recommend that anyone read his blog posts about the different approaches to client-side rendering and the technology behind Solid. It is a really good read regardless of whether you choose to use the library.
[+] [-] theobeers|5 years ago|reply
[+] [-] RedShift1|5 years ago|reply
[+] [-] Demiurge|5 years ago|reply
Whenever I read this, I usually take it that the author is being serious. However, how long does it actually take to load an 88kb library these days? Is it really 'monstrous'? If this was the author's top performance drain based on profiling, I commend their technical abilities.
[+] [-] piranha|5 years ago|reply
And a "monster" is sarcasm referring to that React bundle I described before.
[+] [-] leon_sbt|5 years ago|reply
1) I wasn't code splitting by page, and had a couple heavy dependencies (which made things worse)
2) I wasn't pre-rendering my Create React App site via react-snap. During the CI build, react-snap runs Puppeteer, visits each page on the site, and generates ready-to-go HTML versions of each page.
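The first fix, route-level code splitting, typically looks like this with React.lazy and Suspense (the page components and paths here are hypothetical):

```jsx
import React, { lazy, Suspense } from "react";

// Each page becomes its own chunk, downloaded only when visited,
// instead of one bundle containing every page and its dependencies.
const HomePage = lazy(() => import("./pages/HomePage"));
const AboutPage = lazy(() => import("./pages/AboutPage"));

function App({ route }) {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      {route === "/about" ? <AboutPage /> : <HomePage />}
    </Suspense>
  );
}

export default App;
```

Heavy dependencies used by only one page then ship only with that page's chunk.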
Those two changes took me from a 7/100 to a 95+/100 in short order. It makes logical sense too. Nowadays the site hovers at 85/100, but I don't have the time right now to reinvestigate it.
With the options at the time, I'm happy with my tech choices. If I had to start with React again today, I would use Next.js with a static build output. It would save me a bunch of time scaffolding things.
[+] [-] ForHackernews|5 years ago|reply
[0] https://github.com/turbolinks/turbolinks
[+] [-] mariopt|5 years ago|reply