I don't do web stuff at all, but I really enjoyed this article. I am convinced that software engineers (not to mention others) have thrown the baby out with the bathwater in our brave new world of 32GB memories and fibre-optics. By all means the generous hardware capabilities let us do amazing things, like have a video library, or run massive climate computations, but mostly those resources are piddled away in giant libraries that provide little or no actual functional value.
I don't really pine for the days of the PDP-8, when programmers had to make sure that almost every routine took fewer than 128 words, or the days of System/360, when you had to decide whether the fastest way to clear a register was to subtract it from itself or exclusive-or it with itself. We wasted a lot of time trying to get around stringent limitations of the technology just to do anything at all.
I just looked at the Activity Monitor on my MacBook. Emacs is using 115MB, Thunderbird is at 900MB, Chrome is at something like 2GB (I lost track of all the Renderer processes), and a Freecell game is using 164MB. Freecell, which ran just fine on Windows 95 in 8MB!
I'm quite happy with a video game taking a few gigabytes of memory, with all the art and sound assets it wants to keep loaded. But I really wonder whether we've lost something by not making more of an effort to use resources more frugally.
An addendum...Back in the 1960s, IBM didn't grok time-sharing. When MIT/Bell Labs looked for a machine with address translation, IBM wasn't interested, so GE got the contract. IBM suddenly realized that they had lost an opportunity, and developed their address translation, which ended up in the IBM 360/67. They also announced an operating system, TSS/360, for this machine. IBM practice was to define memory constraints for their software. So Assembler F would run on a 64K machine, Fortran G on a 128K machine, and so on. The TSS engineers asked how much memory their components were given. They were told “It's virtual memory, use as much as you need.” When the first beta of TSS/360 appeared, an attempt to log in produced the message LOGON IN PROGRESS...for 20 minutes. Eventually, IBM made TSS/360 usable, but by then it was too late. 360/67s ended up running VM/CMS, or 3rd party systems: I had many happy years using the Michigan Terminal System.
Remember, there's a gigabit pathway between server and browser, so use as much of the bandwidth as you need.
On my deathbed, I'm not sure I'll be able to forgive our industry for that. I grew up in the third world, where resources were extremely expensive, so my early career was all about doing the most with the resources I had. It was a skill I had honed so well, and now it feels useless and unappreciated. With higher interest rates we see a small degree of it again, but I'm doubtful that hiring managers without that experience will be able to identify it in the wild and pick me.
> I really wonder whether we've lost something by not making more of an effort to use resources more frugally
I'll bite. What do you think we've lost? What would the benefit be of using resources more frugally?
Disclosure: I'm an embedded systems programmer. I frequently find myself in the position where I have to be very careful with my usage of CPU cycles and memory resources. I still think we'd all be better off with infinitely fast, infinitely resourced computers. IMO, austerity offers no benefit.
> But I really wonder whether we've lost something by not making more of an effort to use resources more frugally.
On the desktop we definitely lost responsiveness. Many webpages, even on the speediest, fastest computer of them all, are dog slow compared to what they should be.
Now some pages are fine but the amount of pigs out there is just plain insane.
I like my desktop to be lean and ultra low latency: I'll have tens of windows (including several browsers) and it's super snappy. Switching from one virtual workspace to another is too quick to see what's happening (it takes milliseconds and I do it with a keyboard shortcut: reaching for the mouse is a waste of time).
I know what it means to have a system that feels like it's responding instantly as I do have such a system... Except when I browse the web!
And it's really only some (well, a lot of) sites: people who know what they're doing still come up with amazingly fast websites. But it's the turds: those shipping every package under the sun and calling thousands of micro-services, wasting all the memory available because they know jack shit about computer science, that make the Web a painful experience.
And although I use LLMs daily, I see a big overlap between those having the mindset required to produce such turds and those thinking LLMs are already perfectly fine today to replace devs, so I'm not exactly thrilled about the immediate future of the web.
P.S: before playing the apologists for such turd-sites, remember you're commenting on a very lean website. So there's that.
The main thing I remember from a usability book by Jakob Nielsen is that web pages should fit in 50KB, including all elements. Considering that his book was from 1999, managing to do this in only 2x that size today may be considered a merit.
To put this in another context, today there was a post about Slack's 404 page weighing in at 50MB.
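To make the budget concrete, here's a toy sketch (my own illustration, not from Nielsen's book) of checking a page's total asset weight against the 50KB figure; the asset names and sizes are made up:

```javascript
// Nielsen's guideline: the whole page, all elements included, in 50KB.
const BUDGET_BYTES = 50 * 1024;

// Sum the byte sizes of a page's assets.
function pageWeight(assets) {
  return assets.reduce((total, asset) => total + asset.bytes, 0);
}

// Hypothetical asset manifest for a small page.
const assets = [
  { name: 'index.html', bytes: 12000 },
  { name: 'style.css', bytes: 8500 },
  { name: 'app.js', bytes: 21000 },
];

const weight = pageWeight(assets);
console.log(`${weight} bytes, ${weight <= BUDGET_BYTES ? 'within' : 'over'} budget`);
// → 41500 bytes, within budget
```

In a real build you'd feed this from the bundler's output stats rather than a hand-written list, and fail CI when the budget is exceeded.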
> As I often point out to teams I’m working with, the original 1993 release of DOOM weighed in at under 3MB, while today we routinely ship tens of megabytes of JavaScript just to render a login form. Perhaps we can rediscover the power of constraints, not because we have to, but because the results are better when we do.
Emphasis mine, tying in with how the article opened: the story about the designer who believed accessibility and "good design" are at odds (I'm screaming inside).
Making a website is what got me interested in programming as a 10-year-old during the dot-com bubble. Even back then I realized very quickly that webdev is a cargo cult, and I switched to C and assembly to learn how to program "real" programs. Even now, almost 30 years later, I can make high-quality software based on technology from back then (compared to the constant dependency drift in webdev nowadays). Webdev is just under constant assault by young programmers who don't know better.
Some years ago I made a website again. Screw best practices, I used my systems engineering skills and the browser’s debugger. I had written game engines with soft realtime physics simulations and global illumination over the network. I knew what computers could do. This website would render within 1 frame at 60 FPS without any layout recalculation, garbage collection events, web requests that can’t be parallelized without dependencies etc.
I showed it to friends. They complained it doesn’t work. They didn’t realize that once they clicked, the site displayed the next content instantly (without any weird js tricks). This was a site with a fully responsive and complex looking design. The fact that users are SO used to terrible UX made me realize that I was right about this industry all along as a child.
Curious why, paywall? Genuinely asking as I have a blog on there, guess it's lazy not to host it myself. It is funny when your mostly un-read blog suddenly is graced by medium and it out of nowhere gets thousands of hits.
I'm about halfway through the read so far, but wanted to come back and say that these were/are some of the most interesting challenges to overcome and constraints to work within, from when I was coming up as a web person. Inspired by agencies like Clearleft, I'd seek out old af devices with comically bad browsers, tiny screens, and obtuse input methods. Unfortunately, I never really found a financially rewarding enough path to continue pursuing that; the constraint on most projects that don't have these super-tight requirements baked in is money, time, and looking good, which meant that I could either accept that as a survival mechanism and throw JS at CSS problems, or I could lose my job. Although it's neat that over the years of my incredibly shaky career I've moved from web designer/developer to f̵a̵k̵e̵ software engineer, I've never found UI programming to be quite as rewarding as making an incredibly fast, responsive, pretty, and robust website.
I really enjoyed the article. I have to say, though: sorry, not sorry, but application size is a poor measure of performance. A 128KB size limit doesn't account for pictures, videos, tracking, ads, fonts, and interactivity. "Just avoid them" is not a real-world strategy.
Suggesting that an application should stay within a 128KB limit is akin to saying I enjoy playing games in polygon mode. Battlezone was impressive in the 90s, but today, it wouldn't meet user expectations.
In my opinion, initial load time is a better measure of performance. It combines both the initial application size and the time to interactivity.
Achieving this is much more complex. There are many strategies to reduce initial load size and improve time to interactivity, such as lazy loading, using a second browser process to run code, or minimizing requests altogether. However, due to this complexity, it's also much easier to make mistakes.
Another reason this is often not done well is that it requires cross-team collaboration and cross-domain knowledge. It necessitates both frontend and backend adjustments, as well as optimisation at the request and response levels. And it is often a non-functional requirement like accessibility that is hard to track for a lot of teams.
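Of the strategies mentioned above, lazy loading is often the cheapest win. A minimal sketch (my own, not from the article; `./chart.js` stands in for a hypothetical heavy dependency) of deferring a module until it's first needed:

```javascript
// Wrap an expensive loader so it runs at most once, on first use.
// The cached value (e.g. an import promise) is reused on later calls.
function lazy(loader) {
  let cached = null;
  return () => {
    if (cached === null) {
      cached = loader();
    }
    return cached;
  };
}

// The import only happens when loadChart() is first called, e.g. from a
// click handler, so the module stays off the initial load path entirely.
const loadChart = lazy(() => import('./chart.js'));
```

A bundler that understands dynamic `import()` will split the deferred module into its own chunk, which is what actually shrinks the initial payload.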
It's not about performance, it's about load time and the restrictions of your client apps.
Also, you're thinking too much in terms of an SPA architecture. Using just server-side rendering with a tiny bit of JavaScript, like the article states, removes most of the problems you describe, like initial load time and cross-team collaboration. The load time of the described websites would be near-instant, and there is no frontend team needed.
Maybe I'm dumb, but I really don't understand the point of this post.
Why even make it "reactive"? Just make your site static server-rendered pages? Or just static pages. Is it because additional-content-loading is something users expect?
"Write your site in plain javascript and html. Don't use a framework. Write some minimal css. Bamo. Well under 128kb." ???
At least in this case, one of the ideas seemed to be that if they did an AJAX load of the middle section of the page, they could skip sending the fixed elements (header and footer) over the network repeatedly.
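That idea can be sketched in a few lines. This is my own illustration, not the approach from the post, and the naive regex is only suitable for pages whose markup you control:

```javascript
// Pull the contents of the <main> element out of a full HTML page,
// so only the middle section gets swapped in; header and footer in the
// current document are left untouched. Falls back to the full HTML if
// no <main> element is found.
function extractMain(html) {
  const match = html.match(/<main[^>]*>([\s\S]*?)<\/main>/);
  return match ? match[1] : html;
}

// Browser usage (assumes both pages share the same shell around <main>):
// const html = await (await fetch('/next-page')).text();
// document.querySelector('main').innerHTML = extractMain(html);
```

A more robust variant would have the server expose the fragment directly (so no client-side extraction is needed), which also keeps the response smaller on the wire.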
Firefox reader mode works OK for me. Chromium tells me there was 2.5MB of downstream traffic to load the page, rising to 4.3MB if I scroll to the Recommended from Medium spam at the bottom of the page. That would be appalling if the bar weren't so low.
I'm reminded of The Website Obesity Crisis, [0] where the author mentions reading an article about web bloat, then noticing that the page itself was not exactly a shining example of lightweight design. He even calls out Medium specifically.
[0] https://idlewords.com/talks/website_obesity.htm (discussed at https://news.ycombinator.com/item?id=34466910)
toss1|7 months ago
Evidently, the entire concept of size & communications efficiency has been abandoned. [0]
[0] https://www.the5k.org/about.php
beej71|7 months ago
But, damn, that was some fun stuff. Really challenging to get the graphical results we wanted and keep it under budget (15 KB in the early days).
It's really satisfying.