diek|4 years ago
The fact that they don't make a reference like, "hey, ya know, how _everything_ worked just a few years ago" tells me they think this is somehow a novel idea they're just discovering.
They then go on to describe a convoluted rendering system with worker processes and IPC... I just don't know what to say. They could have built this in Java, .NET, Go, or really any runtime with shared-memory concurrency and threading, and would not have run into any of these issues.
rtpg|4 years ago
How things worked a few years ago: you wrote SSR pages with one set of tools (like Django Template Language), then hooked into it with another set of tools. If your pages are complex enough, you end up with weird brittleness because the "initial page load" is not handled the same way as modifications of that page.
Now it's much closer to using the same set for the initial load and subsequent edits. This is a net win for people working on the frontend, in theory.
The more nuanced thing is that frontend tooling is so lacking in terms of performance, despite being something that theoretically should work very fast. In particular, having a bunch of language tooling written in Javascript is the JS ecosystem's billion dollar mistake IMO.
manigandham|4 years ago
Server-side web frameworks even have modern component-based UI templating now, and features like maps can be layered on top as progressive enhancement without this bloated frontend mess.
jaywalk|4 years ago
skohan|4 years ago
It is funny how you can get away with using the wrong tool for the job in software in a professional environment. Imagine if you were hired to build a house, and you decided to build the foundation out of modeling clay because that's what you're used to working with. And then you started to come up with novel methods of hardening modeling clay when it proved not to be fit for purpose.
I guess you can get away with it in software because these kinds of decisions normally only manifest as increased server costs, or moments of users' lives lost to performance issues, which are much less evident to the outside observer.
southerntofu|4 years ago
- sourced from local material (use a different type of clay or straw if you need); no need for chemicals or for sand imported from depleted beaches on the other side of the world
- recyclable almost to infinity (need a bigger/better house? just tear it down and reuse the materials)
- cool in summer, warm in winter if you design it well
- lasts for decades if your structure is designed well: I'm not aware of really old examples, but it wouldn't be a surprise if the structure outlives us all (does someone have resources on this topic?)
So it turns out your example was more interesting and less absurd than you originally thought. Just like server-side rendering uses an order of magnitude fewer resources (for n clients it's O(1) with caching, whereas client-side rendering is O(n)), it turns out clay is the perfect material for using an order of magnitude (or even more orders?) fewer resources to build your house than concrete sourced from polluting industries and endangered sand deposits.
oefrha|4 years ago
> After a string of production incidents in early 2021, we realized our existing SSR system was failing to scale as we migrated more pages from Python-based templates to React.
Apparently their old server-rendered Python app (not called SSR though for obvious reasons) was scaling just fine before the migration.
anentropic|4 years ago
> We evaluated several languages when it came time to implement the SSR Service (SSRS), including Python and Rust. It would have been ideal from an internal ecosystem perspective to use Python, however, we found that the state of V8 bindings for Python were not production ready
Wow, that is some convoluted architecture
It seems like the problem here is React
ksec|4 years ago
And for some strange reason this mostly happens to Web Development in general.
danjac|4 years ago
2. Watch potential competitors chase their tails & burn budgets adopting hot new thing
3. Quietly drop or sideline solution a few years later
aaronbrethorst|4 years ago
wil421|4 years ago
a_c|4 years ago
I think it is because a lot of tutorials nowadays talk about how to do X with tool Y, without giving the historical context for why Y is in use in the first place. Usually the tool was built to solve a specific problem. When people discover that the tool doesn't solve their own problem, workarounds based on that very same tool are devised.
Other common examples are Kubernetes and microservices. I have seen startups jump onto the bandwagon before having real customers, when the tool in question is meant for scalability.
0des|4 years ago
unknown|4 years ago
[deleted]
pjmlp|4 years ago
Really? A technique invented for JavaScript?!?
ec109685|4 years ago
Instead, this blog post shows they made small changes that resulted in a much better performing site with less server resources needed.
The idea of dropping server-side rendering when the site is temporarily overloaded is a good one that you can't do if it's built in Java, .NET, or Go.
A fork exec web server is not convoluted.
mrjin|4 years ago
bob1029|4 years ago
Getting the desired plaintext documents across the network has never been such a clusterfuck in my experience.
hunterb123|4 years ago
SSR improves the user experience for first time use of those apps and enables SEO.
What you say is subjective, but I somewhat agree with your opinion, though only for web sites, not web apps.
Overall I agree Yelp should have stuck with the existing system as their product functions as a site.
southerntofu|4 years ago
Any .html page works offline. You don't need any JS or framework for that. If your JS-powered page doesn't work offline, it means either it requires online connectivity to solve problems, or it's badly designed and does not respect "progressive enhancement" principles.
> SSR improves the user experience for first time use of those apps
SSR improves UX for everyone. Seriously, most "web" pages these days take dozens of seconds to load and use a non-negligible percentage of our CPU/RAM. If you want to know what real-world conditions look like for literally over a billion people, run tests from a Core 2 Duo (or a similar VM) with 2GB RAM, with simulated 10% packet loss and 1Mbit/s bandwidth.
mrweasel|4 years ago
Couldn't you build a native application and NOT have this problem? It seems like yet another self-inflicted problem.
LAC-Tech|4 years ago
Right, they'd have brand new issues because they were dealing with shared memory.
manigandham|4 years ago
JS doesn't have an advantage here, it's just limited to being single-threaded.
tonetheman|4 years ago
SSR ... you mean like PHP did it ... and most all of the tech did it before.
How did this even make it to Hnews...