Fully agree with the premise. Most youngish users have never even experienced true performance. Our software stacks are shit and rotten to the core, it's an embarrassment. We have the smartest people in the world enabling chip design approaching the atomic level, and then there's us software "engineers" pissing it away with an inefficiency factor of 10 million %.
We are very, very bad at what we do, yet somehow get richly rewarded for it.
We've even invented a new performance problem: intermittent performance. Performance isn't just poor, it's also extremely variable due to distributed computing, Lambda, whatever. So users can't even learn the performance pattern.
Where chip designers move heaven and earth to move compute and data as closely together as is physically possible, leave it to us geniuses to tear them apart as far as we can. Also, leave it to us to completely ignore parallel computing so that your 16 cores are doing fuck all.
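To make the parallel-computing point concrete, here is a minimal sketch (all names hypothetical; real per-chunk work would be parsing, hashing, or compression rather than a toy checksum) of farming independent work out to every core instead of looping on one:

```python
from concurrent.futures import ProcessPoolExecutor

def chunk_checksum(chunk):
    # Stand-in for real CPU-bound per-chunk work (parsing, hashing,
    # compression); an Adler-style modular sum keeps the example tiny.
    return sum(chunk) % 65521

def parallel_checksums(chunks):
    # Spread independent chunks across all available cores, so the
    # other 15 cores aren't sitting idle while one does everything.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(chunk_checksum, chunks))
```

This only pays off when each chunk carries enough work to amortize the process overhead; for tiny tasks the serial loop wins.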
You may now comment on why our practices are fully justified.
The reason is that the money people don't want to pay for it. We absolutely have the coding talent to make efficient, maintainable code, what we don't have are the available payroll hours.
10x every project timeline and it's fixed, simple as.
Granted, the big downside is, you have to keep your talent motivated and on-task 10x as long. That's like turning a quarter horse into a plough horse: it's not likely to happen quickly, if at all. You'd really need to start over with the kids who are in high school now writing calculator apps in Python, by making them re-write them in C and grading them on how few lines they use.
I.e., it's a pipe dream, and will continue to be until we run out of hardware capability, which has been "soon" for the last 30 years, so don't hold your breath.
Labour productivity vs. wages has doubled since the 1970s and the trend seems to be continuing. It was about 150% in 2000, so we can use Excel as the benchmark.
This means that an accountant today can wait for Excel to load for 2 whole hours of their 8 hour shift, and still be as productive as an accountant from 20 years ago!
Isn't that amazing! Technology is so cool, and our metrics for defining economic success are incredible.
People push back when I tell them they should drop PHP for Go or Python for Rust.
It doesn't matter that it would be better for everyone and the planet. It's a prisoner's dilemma. I only get rewarded and promoted for shipping stuff and meeting deadlines, even if the products are slow.
Thank God for open source. Programmers produce amazing libraries, frameworks, languages and systems when business demands and salary is out of the picture.
I spent years designing and implementing a new data management system where speed was its main advantage. I painstakingly wrote performant code that relied on efficient algorithms that could manage large amounts of data (structured or unstructured). I tried to take advantage of all the hardware capabilities (caching, multiple threads, etc.).
I mistakenly thought that people might flock to it once I was able to demonstrate that it was significantly faster than other systems. I couldn't have been more wrong. Even database queries that were several times faster than other systems got a big yawn by people who saw it demonstrated. No one seemed interested in why it was so fast.
But people will jump on the latest 'fashionable' application, platform, or framework often no matter how slow and inefficient it is.
The wonderful thing is that the hardware performance is there if you just dare to toss the stack. One of my long-standing projects is to build a bare metal (at least as far as userspace is concerned) programming language called Virgil. It can generate really tiny binaries and relies on hardly any other software (not even C!). As LLVM and V8 and even Go get piggier and piggier, it feels more magical every day.
> Most youngish users have never even experienced true performance
Most of them have never experienced the level of instability and insecurity of past computing either. Those improvements aren't free. In the past, particularly with Windows (since that's what the videos recorded), it would be normal for my computer to freeze or crash every other hour. That happens much less today.
I have absolutely no idea what you are talking about "true performance" - maybe you are from an alternate reality? I don't think of the 2000s as "true performance" but rather crashes and bugs and incredibly slow loading times for tiny amounts of data.
I can grep through a gigabyte text file in seconds. I couldn't even do that at all back in 2000.
It's one of the things that are infuriating for technical folks but meh for everybody else.
In the days of programs taking forever to load or web pages downloading images slowly we knew what technical limitations were there.
Now we know how much sluggishness comes from cruft, downright hostility (tracking), sloppy developer work, etc.. we know this is a human problem and that angers us.
But for non-technical people it hasn't changed too much. Computing wasn't super pleasant 25 years ago and it's not now. Instead of waiting half a minute for Word to load they wait 4-5 seconds for Spotify to load. They were never interested in Notepad or Cmd or Paint. It doesn't bother them that now they open slower than in 1999.
There is no mention in this article of the effect of modern security measures on desktop app latency. I'm thinking about things like verifying the signatures of binaries before launch (sometimes this requires network roundtrips, as on macOS). Also, if there are active security scans running in the background, this will eventually affect latency, even if you have lots of efficiency cores that can do this in the background (at some point there will be IO contention).
Another quibble I have with the article is the statement that the visual effects on macOS were smoothly animated from the start. This is not so. Compositing performance on Mac OS X was pretty lousy at first and only got significantly better when Quartz Extreme was released with 10.2 (Jaguar). Even then, Windows was much more responsive than Mac OS X. You could argue that graphics performance on macOS didn't get really good until the release of Metal.
Nowadays, I agree, Windows performance is not great, and macOS is quite snappy, at least on Apple Silicon. I too hope that these performance improvements aren't lost with new bloat.
I don't think all this should be relevant - once it was launched at least once, the OS should be able to tell that the binary hasn't changed, and thus signature verification is not necessary.
The comparison is using OS X 10.6 - I used to daily drive it and it was pretty snappy on my machine - which is corroborated by this guy's Twitter video capture.
As for Windows performance - Notepad/File Explorer might be slower than strictly necessary, but one of the advantages of Windows' backwards compatibility, is that I keep using the same stuff - Total Commander and Notepad++, that I used from the dawn of time, and those things haven't gotten slow (or haven't changed lol).
Even on an M2 Mac, Spotlight takes a second or two to appear after you hit the shortcut. Apple Notes takes an absurd 8 seconds to start. Apple Music and Spotify also take seconds to start. Skype takes 10 seconds.
I'm very happy with my M2 Mac. It's a giant leap forward in performance and battery life. But Electron is a giant leap backward of similar magnitude.
The comparison between the win32 Notepad and the UWP version is telling, though, on the same hardware, and with the same security constraints. Similar between the old (Windows 7) calculator and the newer one.
I'm glad you mentioned this in the comments - I was wondering if they were going to touch how applications are sandboxed and everything. I would imagine that is a large part of current 'sluggishness'.
I run a machine with lots of RAM and a hefty CPU and a monster GPU and an SSD for my Linux install and...a pair of HDDs with ZFS managing them as a mirror.
Wat. [1]
I also have Windows on a separate drive...that is also an HDD.
Double wat.
My Linux is snappy, but I also run the most minimal of things: I run the Awesome Window Manager with no animations. I run OpenRC with minimal services. I run most of my stuff in the terminal, and I'm getting upset at how slow Neovim is getting.
But my own stuff, I run off of ZFS on hard drives, and I'll do it in VMs with constrained resources.
Why?
While my own desktop has been optimized, I want my software to be snappy everywhere, even on lesser machines, even on phones. Even on Windows with an HDD.
This is my promise: my software will be far faster than what companies produce.
There is a Raymond Chen post (I'll come back to add the link if I find it) that explains how the developers working on Windows 95 were only allowed to use machines with the minimum specs required by the OS. This was to ensure that the OS ran well on those specs.
And, IMHO, that's the way it should be: I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population... and that's part of why I stick to older hardware for longer than others would say is reasonable.
Fast software that nobody uses helps nobody. Slow software that everyone uses is, well, slow. So I'm curious: how many people use your software? I get that this topic is like nerd rage catnip but if people actually want to help users, then they need to meet users where they're at. And if "marketing" is what's needed, then maybe it is. Software is generally built for humans after all.
Don't get me wrong, all of my servers at home run Void Linux and use runit. Pretty much anything that runs on them is snappy, and they run on 10 year old hardware but still sing because I use software written in Go or native languages. But remembering the particulars about runit services and symlinks is something I forget every 3 months between deploying new services. Trying to troubleshoot the logger is also a fun one where I remember then forget a few months later. Using systemd, this all just comes for free. Maybe I should write all of this down, but I'm doing this for fun, aren't I?
The reason users don't care that much about slow software is because they use software primarily to get things done.
I think there’s an important additional factor, which is how dynamic so much UI is these days. So much is looked up at runtime, rather than being determined at compile time or at least at some static time. That means you can plug a second monitor into your laptop and everything will “just work”. But there is no reason it should take a long time to start system settings (an example from the article) as the set of settings widgets doesn’t change much — for many people, never — and so can be cached either the first time you start it or precached by a background process. Likewise a number of display-specific decisions can be made, at least for the laptop’s screen or phone’s screen, and frozen.
> Linux is probably the system that suffers the least from these issues as it still feels pretty snappy on modest hardware. […]. That said, this is only an illusion. As soon as you start installing any modern app that wasn’t developed exclusively for Linux… the slow app start times and generally poor performance show up.
This is not an illusion. Cross-platform programs suck, so everyone avoids them, right? Electron apps and whatnot are universally mocked. You would only use one for an online service like Spotify or something. The normal use case is downloading some nice native code from your repo.
These kinds of realizations have made me look into permacomputing [1], suckless [2] and related fields of research and development in the past few months.
We should be getting so much more from our hardware! Let's not settle for software that makes us feel bad.
Mind you, the software we had 20 years ago was fully featured, smaller and faster than what we have today (orders of magnitude faster when adjusted for the increase in hardware speed).
Suckless, on the other hand, is impractical esthetic minimalism that removes "bloat" by removing the program. I'd rather run real software than an art project.
If you want more from your hardware, the answer is neither the usual bloatware, nor Suckless crippleware.
It's funny, the older I get the less I care about this stuff. I like to use technology to, well, live my life in a more effective, effort-free manner. I have lots of friends who aren't other nerdy techies. In high school I refused to use "proprietary, inefficient WYSIWYG garbage that disrespected the user" like Word and typed up all of my essays in LaTeX instead. Now I get accounting spreadsheets for vacations going on my smartphone using Google Sheets. I still love writing code but my code has become more oriented around features and experiences for myself rather than code for the sake of code.
I love exploring low-latency code but rather than trying to create small, sharp tools the suckless way, I like to create low latency experiences end-to-end. Thinking about rendering latency, GUI concurrency, interrupt handling, etc. Suckless tools prioritize the functionality and the simplicity of code over the actual experience of using the tool. One of my favorite things to do is create offline-first views (which store things in browser Local Storage) of sites like HN that paper over issues with network latency or constrained bandwidth leading to retries.
I find suckless and permacomputing to be the siren song of a type of programmer, the type of programmer who shows up to give a presentation and then has to spend 10 minutes getting their lean Linux distro to render a window onto an external screen at the correct DPI, or even to connect to the wifi using some wpa_supplicant incantations.
WTF happened to all the THOUSANDS AND THOUSANDS of machines I deployed to datacenters over the decades? Where are the F5 load balancers I spent $40,000 on per box in 1999?
I know that when we did Lucas' presidio migration, tens of million$ of SGI boxes went to the ripper. That sucks.
edit:
All these machines could be used to house a 'slow internet' or 'limited internet'
Imagine when we graduate our wealth gap to include the information gap - where the poor only have access to an internet indexed to September 2021 - oh wait...
But really - that is what AI will bring: an information gap: only the wealthy companies will have real-time access to information on the internet - and all the poor will have outdated information that will already have been mined for its value.
think of how HFT network cards with insane packet buffers, and private fiber lines gave hedgies the microsecond advantages on trading stocks...
That's basically what AI will power - the hyper-accelerated exploitation of information on the web via AI - but the common man will be relegated to obsolete AI, while the @sama people of the world build off-planet bunkers.
I use Suckless terminal myself, but if I'm not mistaken it's actually not the fastest terminal out there, despite its simplicity[^1]. My understanding is that many LOCs and complex logic routines are dedicated to hardware/platform-specific optimizations and compatibility, but admittedly this type of engineering is well beyond my familiarity.
Also, OpenBSD's philosophy is very similar to Suckless. One of the more notable projects that come to mind is the `doas` replacement for `sudo`.
[^1]: This is based on Dan Luu's testing (https://danluu.com/term-latency/). I don't know when this testing was done but I assume a few years ago because I remember finding it before.
Yeah, totally possible to get excellent results with older hardware, and really stellar results with very new hardware, if you're running stuff that's not essentially made to be slow.
I basically only upgrade workstations due to web browsers needs, and occasionally because a really big KiCAD project brings a system to a crawl. At this point even automated test suite runtimes are more improved by fixing things that stop test parallelization from working efficiently vs. bigger hardware.
My impression of Suckless is that it’s “Unix philosophy” software where you edit the code and recompile instead of using dynamic configuration like all those config files. And while there are way too many ad hoc app-specific config systems out there, I don’t see how Suckless makes a huge difference for simplifying things.
As noted by another thread, the Notepad example is surprisingly telling.
My initial gut reaction was to blame the modern drawing primitives. I know that a lot of the old occlusion based ideas were somewhat cumbersome for the application, but they also made a lot of sense for scoping down all of the work that an app had to do.
That said, seeing Notepad makes me think it is not the modern drawing primitives, but the modern application frameworks? Would be nice to see a trace of what all is happening in the first few seconds of starting these applications. My current imagination is that it is something akin to a full classpath scan of the system to find plugins that the application framework supported, but that all too many applications don't even use.
That is, writing an application used to start with a "main" and you did everything to set up the window and what you wanted to show. Nowadays, you are as likely to have your main be offloaded to some logic that your framework provides, with you providing a ton of callbacks/entrypoints for the framework to come back to.
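One cheap mitigation for framework-driven startup scans is to defer them until something actually asks for the result. A toy sketch (all names invented; the "scan" stands in for a classpath or registry walk):

```python
import functools

def scan_for_plugins():
    # Stand-in for an expensive plugin/classpath scan at startup.
    return ["spellcheck", "cloud-sync"]

class EagerApp:
    # Framework style: pay for the scan on every launch, used or not.
    def __init__(self):
        self.plugins = scan_for_plugins()

class LazyApp:
    # Lazy alternative: the scan runs at most once, and only if some
    # code path actually touches `plugins`.
    @functools.cached_property
    def plugins(self):
        return scan_for_plugins()
```

Apps that never use plugins then start without paying for the scan at all, which is what the hypothetical trace in the comment above would presumably reveal as wasted time.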
> Rumor 1: Rust takes more than 6 months to learn – Debunked !
> All survey participants are professional software developers (or a related field), employed at Google. While some of them had prior Rust experience (about 13%), most of them are coming from C/C++, Python, Java, Go, or Dart.
> Based on our studies, more than 2/3 of respondents are confident in contributing to a Rust codebase within two months or less when learning Rust. Further, a third of respondents become as productive using Rust as other languages in two months or less. Within four months, that number increased to over 50%. Anecdotally, these ramp-up numbers are in line with the time we’ve seen for developers to adopt other languages, both inside and outside of Google.
> Overall, we’ve seen no data to indicate that there is any productivity penalty for Rust relative to any other language these developers previously used at Google. This is supported by the students who take the Comprehensive Rust class: the questions asked on the second and third day show that experienced software developers can become comfortable with Rust in a very short time.
"macOS ... the desktop switching animation is particularly intrusive, oh god how I hate that thing"
Why oh why won't Apple let you turn this off? It annoys if not nauseates so many people.
This is just one example, but indicative of their mindset, why I dislike Apple, and use Linux whenever I can. It's such a calming joy to switch instantly between desktops without the rushing-train effect.
I haven't paid any attention to the robotics world in ages and in the last six months I've discovered a bunch of interesting things people have been doing with less instead of more. Particular standouts are tiny maze-running robots, and classifications of fighting robots by weight. There's a guy with a bot named 'cheesecake' that has some interesting videos.
I think we could all do with celebrating the small. ESP32 and STM32 have hit a point where you can do modest computing tasks on them without having to become an embedded hardware expert to do so. I'm at one of those crossroads in my career and I'm trying to decide if I double down on a new web friendly programming language (maybe Elixir) or jump into embedded.
I've done a reasonable amount of programming in the small, several times tricked into it, and while it's as challenging if not moreso in the middle of doing the work, the nostalgia factor after the fact is much higher than most of the other things I've done.
Not that I disagree with the core point that some software gets worse over time, but I don't think it is valid to say that users have not benefitted from the ease of development afforded by Electron. Spotify has maintained its nominal price at $9.99 for 12 years, meaning its real price has fallen by one third. I don't know anything about Spotify or why they spend over a billion dollars in R&D each year, but if lack of attention to UI performance has helped cut their development costs then users might be benefitting through lower prices.
> I don't know anything about Spotify or why they spend over a billion dollars in R&D each year
Standups, one-on-ones, team fika, NIH, retros, agile retros, incident post mortems, cross-team fika, town halls, pool/billiards, table-tennis, and testing-in-prod.
A few years ago, I was working with a team that was trying to convert an entire API for a fairly straightforward application into REST api microservices.
The architect wanted to break everything up into extremely small pieces, small enough that many were dependent on each other for every single call. Think "a unified address service" to provide a physical address via an identifier for anything that had an address (customers, businesses, delivery locations, etc).
The problem was that it turns out when you're looking up a customer or business, you always need an address, so the customer service needed to hit the address service every time.
Disregarding the fact that this whole thing was a stupid design, the plan was that when you hit the customer api, the customer code would make internal http calls to the address service, etc.
I pointed out that this was a ton of unnecessary network overhead, when all of this information was sitting in a single database.
The whole team's argument was effectively - "it's 2015, computers and networks are fast now, we don't need to worry about efficiency, we'll just throw more hardware at it".
The whole thing ended up being scrapped, because it was crippled by performance issues. I ended up rewriting the whole thing as a "macroservice" which was 60000% faster for some particularly critical backend processes.
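The anecdote boils down to the classic N+1 problem. A toy sketch with SQLite standing in for both "services" (schema and names invented for illustration): the first function mimics the per-customer hop to an address service, the second gets the same answer in one round-trip.

```python
import sqlite3

# Customers and addresses live in one database, as in the anecdote.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, addr_id INTEGER);
    CREATE TABLE address  (id INTEGER PRIMARY KEY, city TEXT);
    INSERT INTO address  VALUES (1, 'Oslo'), (2, 'Lima');
    INSERT INTO customer VALUES (10, 'Ada', 1), (11, 'Bo', 2);
""")

def customers_microservice_style():
    # N+1 pattern: one query for customers, then one "service call"
    # (here just another query) per customer to fetch its address.
    rows = db.execute(
        "SELECT id, name, addr_id FROM customer ORDER BY id").fetchall()
    out = []
    for _cid, name, addr_id in rows:
        city = db.execute(
            "SELECT city FROM address WHERE id = ?", (addr_id,)).fetchone()[0]
        out.append((name, city))
    return out

def customers_joined():
    # Same answer in a single round-trip.
    return db.execute(
        "SELECT c.name, a.city FROM customer c "
        "JOIN address a ON a.id = c.addr_id ORDER BY c.id").fetchall()
```

With in-process SQLite the N+1 version is merely wasteful; replace each inner query with an HTTP call across a network and the overhead multiplies by every customer in the result set.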
Anyway ... I think that mentality is prevalent in a lot of people involved in technology creation, technology has improved so much, moore's law etc etc etc.
So let's not worry about how much memory this thing takes, or how much disk space this uses, or how much processing power this takes, or how many network calls. Don't worry about optimization, it's no big deal, look at how fast everything is now.
> a lot of the Windows shell and apps have been slowly rewritten in C#
I worked on the Shell team until late 2022. There is very little C#, if any at all.
The vast majority of the Windows Shell is still C++ with a significant amount of WinRT/COM.
What I had in mind when I wrote this sentence though is how the modern apps that people seem to like /are/ C#, such as Windows Terminal and PowerToys, and these feel quite slow to me. But, yeah, calling those the shell is a stretch.
This is the reason I like SumatraPDF. It has an old-fashioned user interface, a rather limited set of features, but boy it opens fast. I wish there were more apps in that mold.
It's clear that the biggest problem is companies prioritizing their cost, at the expense of user experience.
But after that, the biggest problem is clearly the framework/language you use. The maxim "premature optimization is the root of all evil" has done damage here. The problem with frameworks/languages is that by the time you finish your features and profile your code, you're already doomed. There's no way to speed up the entire framework/language, because it's part of everything you do - death by a thousand cuts. Nothing you do can improve upon the fundamental fact of running in an interpreter, with a garbage collector, with complex layout calculations (a la HTML/CSS instead of Win32), or with major choices like processes over threads, sync IO over async IO.
Well, there is a step beyond framework hell that can work, which is "living inside your own black box"[0]. This strategy intentionally supersedes the lower-level abstraction layers with a higher-level, application-focused one that eases rewriting the underlying stack "sometime down the road". It's nearly the only way you can get that.
But it does require a good understanding of what the application is and does, and a lot of software isn't that: it's just more stuff that has a behavior when you click around and press keys.
Actually I suspect unnecessary use of async IO is what makes many Rust applications slow. It surely makes things slower to compile (hundreds of crate dependencies for the Tokio ecosystem), and it makes the binaries bigger, which in turn makes the application slower to cold start and download.
GIMP will stall for 10-15 seconds at startup looking for XSANE plugins. Apparently it's calling some external server, bad in itself, and that external server is slow. Worse, this delay stalls out the entire GUI for both GIMP and other programs.
There's no excuse for this "phoning home". Especially for XSANE, which is a rather bad scanner interface.
> GIMP will stall for 10-15 seconds at startup looking for XSANE plugins. Apparently it's calling some external server, bad in itself, and that external server is slow. Worse, this delay stalls out the entire GUI for both GIMP and other programs.
Do you remember where you came across that explanation?
I'd be very surprised if it weren't something like an mDNS query with a high timeout. Which is its own problem (ideally it'd be async), but a far cry from it trying to access something on the internet.
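Whatever the real cause, the fix pattern is the same: don't block the UI on a probe that may time out. A small sketch, with a fake probe standing in for the XSANE/mDNS lookup (the 1-second delay is just for illustration; the names are invented):

```python
import concurrent.futures
import time

def discover_scanners():
    # Stand-in for the XSANE probe: pretend the network lookup hangs
    # (shortened to 1 s here; real timeouts can be far longer).
    time.sleep(1.0)
    return ["network-scanner"]

def scanners_nonblocking(timeout=0.05):
    # Fire the probe in the background and give the UI an answer right
    # away; the scanner menu can be refreshed when the result lands.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(discover_scanners)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return []  # start with no scanners instead of freezing the GUI
```

The startup path then costs at most `timeout` seconds regardless of how sick the network is, which is the "ideally it'd be async" behavior described above.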
Too many people are arguing that things like Slack and Linen are performing as well as could be expected due to the functionality they are providing.
> This is my promise: my software will be far faster than what companies produce.

Because speed will be my killer feature. [2]
[1]: https://www.destroyallsoftware.com/talks/wat
[2]: https://bdickason.com/posts/speed-is-the-killer-feature/
[+] [-] jmmv|2 years ago|reply
And, IMHO, that's the way it should be: I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population... and that's part of why I stick to older hardware for longer than others would say is reasonable.
[+] [-] Karrot_Kream|2 years ago|reply
Don't get me wrong, all of my servers at home run Void Linux and use runit. Pretty much anything that runs on them is snappy, and they run on 10 year old hardware but still sing because I use software written in Go or native languages. But remembering the particulars about runit services and symlinks is something I forget every 3 months between deploying new services. Trying to troubleshoot the logger is also a fun one where I remember, then forget a few months later. Using systemd, this all just comes for free. Maybe I should write all of this down, but I'm doing this for fun, aren't I?
The reason users don't care that much about slow software is because they use software primarily to get things done.
[+] [-] gumby|2 years ago|reply
I think there’s an important additional factor, which is how dynamic so much UI is these days. So much is looked up at runtime, rather than being determined at compile time or at least at some static time. That means you can plug a second monitor into your laptop and everything will “just work”. But there is no reason it should take a long time to start system settings (an example from the article) as the set of settings widgets doesn’t change much — for many people, never — and so can be cached either the first time you start it or precached by a background process. Likewise a number of display-specific decisions can be made, at least for the laptop’s screen or phone’s screen, and frozen.
Here’s some sobering perspective on this from 40 years ago: https://www.folklore.org/StoryView.py?story=Saving_Lives.txt
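The caching idea can be sketched in a few lines. Everything here is hypothetical (the cache path, the version key, and the panel list are all invented), but it shows the shape: pay the expensive discovery once, then reuse the result until the build changes:

```python
import hashlib
import json
import os
import tempfile
import time

# Hypothetical cache location; a real app would key this per user and per build.
CACHE = os.path.join(tempfile.gettempdir(), "settings_panels_cache.json")

def cache_key():
    # Invalidate the cache whenever the app build changes.
    return hashlib.sha256(b"settings-app-build-1.2.3").hexdigest()

def discover_panels():
    # Stand-in for the expensive runtime lookup (plugin scan, display probing).
    time.sleep(0.5)
    return [{"id": "display"}, {"id": "network"}, {"id": "sound"}]

def load_panels():
    try:
        with open(CACHE) as f:
            cached = json.load(f)
        if cached.get("key") == cache_key():
            return cached["panels"]        # fast path: no scan at all
    except (OSError, ValueError):
        pass                               # no cache yet, or corrupt: rebuild
    panels = discover_panels()             # slow path, first launch only
    with open(CACHE, "w") as f:
        json.dump({"key": cache_key(), "panels": panels}, f)
    return panels
```

The second launch skips the half-second "discovery" entirely; a background process could just as well do the first fill before the user ever opens settings.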
[+] [-] bee_rider|2 years ago|reply
This is not an illusion. Cross-platform programs suck, so everyone avoids them, right? Electron apps and whatnot are universally mocked. You would only use one for an online service like Spotify or something. The normal use case is downloading some nice native code from your repo.
[+] [-] louismerlin|2 years ago|reply
We should be getting so much more from our hardware! Let's not settle for software that makes us feel bad.
[1] http://www.permacomputing.net/
[2] https://suckless.org/
[+] [-] mike_hock|2 years ago|reply
Mind you, the software we had 20 years ago was fully featured, smaller and faster than what we have today (orders of magnitude faster when adjusted for the increase in hardware speed).
Suckless, on the other hand, is impractical esthetic minimalism that removes "bloat" by removing the program. I'd rather run real software than an art project.
If you want more from your hardware, the answer is neither the usual bloatware, nor Suckless crippleware.
[+] [-] Karrot_Kream|2 years ago|reply
I love exploring low-latency code but rather than trying to create small, sharp tools the suckless way, I like to create low latency experiences end-to-end. Thinking about rendering latency, GUI concurrency, interrupt handling, etc. Suckless tools prioritize the functionality and the simplicity of code over the actual experience of using the tool. One of my favorite things to do is create offline-first views (which store things in browser Local Storage) of sites like HN that paper over issues with network latency or constrained bandwidth leading to retries.
I find suckless and permacomputing to be the siren song of a type of programmer, the type of programmer who shows up to give a presentation and then has to spend 10 minutes getting their lean Linux distro to render a window onto an external screen at the correct DPI, or even to connect to the wifi using some wpa_supplicant incantations.
[+] [-] samstave|2 years ago|reply
WTF happened to all the THOUSANDS AND THOUSANDS of machines I deployed to datacenters over the decades? Where are the F5 load balancers I spent $40,000 on per box in 1999?
I know that when we did Lucas' presidio migration, tens of million$ of SGI boxes went to the ripper. That sucks.
edit:
All these machines could be used to house a 'slow internet' or 'limited internet'
Imagine when we graduate our wealth gap to include the information gap - where the poor only have access to an internet indexed to September 2021 - oh wait...
But really - that is WHAT AI will bring: an information gap: only the wealthy companies will have real-time access to information on the internet - and all the poor will have outdated information that will already have been mined for its value.
think of how HFT network cards with insane packet buffers, and private fiber lines gave hedgies the microsecond advantages on trading stocks...
That's basically what AI will power - the hyper-accelerated exploitation of information on the web via AI - but the common man will be relegated to obsolete AI, while the @sama people of the world build off-planet bunkers.
[+] [-] lofatdairy|2 years ago|reply
Also, OpenBSD's philosophy is very similar to Suckless. One of the more notable projects that comes to mind is the `doas` replacement for `sudo`.
[^1]: This is based on Dan Luu's testing (https://danluu.com/term-latency/). I don't know when this testing was done, but I assume a few years ago, because I remember coming across it a while back.
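Part of what makes `doas` appealing is how small its configuration can be; as a sketch (the group name depends on your system), a complete policy can be a single rule:

```
# /etc/doas.conf: let members of the wheel group run commands as root,
# remembering successful authentication for a short while (persist)
permit persist :wheel
```

Compare that with the several pages of grammar in a typical sudoers manual.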
[+] [-] systems_glitch|2 years ago|reply
I basically only upgrade workstations due to web browsers needs, and occasionally because a really big KiCAD project brings a system to a crawl. At this point even automated test suite runtimes are more improved by fixing things that stop test parallelization from working efficiently vs. bigger hardware.
[+] [-] taeric|2 years ago|reply
My initial gut reaction was to blame the modern drawing primitives. I know that a lot of the old occlusion-based ideas were somewhat cumbersome for the application, but they also made a lot of sense as a way to scope down all of the work that an app had to do?
That said, seeing Notepad makes me think it is not the modern drawing primitives, but the modern application frameworks? Would be nice to see a trace of what all is happening in the first few seconds of starting these applications. My current imagination is that it is something akin to a full classpath scan of the system to find plugins that the application framework supported, but that all too many applications don't even use.
That is, writing an application used to start with a "main", and you did everything to set up the window and what you wanted to show. Nowadays, your main is as likely to be offloaded to some logic that your framework provides, with you supplying a ton of callbacks/entrypoints for the framework to come back to.
[+] [-] qsantos|2 years ago|reply
Related to that, it might not actually be the case: https://news.ycombinator.com/item?id=36495667. Key takeaway:
> Rumor 1: Rust takes more than 6 months to learn – Debunked !
> All survey participants are professional software developers (or a related field), employed at Google. While some of them had prior Rust experience (about 13%), most of them are coming from C/C++, Python, Java, Go, or Dart.
> Based on our studies, more than 2/3 of respondents are confident in contributing to a Rust codebase within two months or less when learning Rust. Further, a third of respondents become as productive using Rust as other languages in two months or less. Within four months, that number increased to over 50%. Anecdotally, these ramp-up numbers are in line with the time we’ve seen for developers to adopt other languages, both inside and outside of Google.
> Overall, we’ve seen no data to indicate that there is any productivity penalty for Rust relative to any other language these developers previously used at Google. This is supported by the students who take the Comprehensive Rust class: the questions asked on the second and third day show that experienced software developers can become comfortable with Rust in a very short time.
[+] [-] jacknews|2 years ago|reply
Why oh why won't Apple let you turn this off? It annoys if not nauseates so many people.
This is just one example, but it's indicative of their mindset, and of why I dislike Apple and use Linux whenever I can. It's such a calming joy to switch instantly between desktops without the rushing-train effect.
[+] [-] hinkley|2 years ago|reply
I think we could all do with celebrating the small. ESP32 and STM32 have hit a point where you can do modest computing tasks on them without having to become an embedded hardware expert to do so. I'm at one of those crossroads in my career and I'm trying to decide if I double down on a new web friendly programming language (maybe Elixir) or jump into embedded.
I've done a reasonable amount of programming in the small, several times tricked into it, and while it's as challenging, if not more so, in the middle of doing the work, the nostalgia factor after the fact is much higher than for most of the other things I've done.
[+] [-] mrkeen|2 years ago|reply
Standups, one-on-ones, team fika, NIH, retros, agile retros, incident post mortems, cross-team fika, town halls, pool/billiards, table-tennis, and testing-in-prod.
[+] [-] mmazing|2 years ago|reply
The architect wanted to break everything up into extremely small pieces, small enough pieces that many were dependent on each other for every single call. Think "a unified address service" to provide a physical address via an identifier for anything that had an address (customers, businesses, delivery locations, etc).
The problem was that it turns out when you're looking up a customer or business, you always need an address, so the customer service needed to hit the address service every time.
Disregarding the fact that this whole thing was a stupid design, the plan was that when you hit the customer api, the customer code would make internal http calls to the address service, etc.
I pointed out that this was a ton of unnecessary network overhead, when all of this information was sitting in a single database.
The whole team's argument was effectively - "it's 2015, computers and networks are fast now, we don't need to worry about efficiency, we'll just throw more hardware at it".
The whole thing ended up being scrapped, because it was crippled by performance issues. I ended up rewriting the whole thing as a "macroservice" which was 60000% faster for some particularly critical backend processes.
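The cost of those internal hops is easy to sketch. Below, an in-memory SQLite database stands in for the shared database, and a `time.sleep` stands in for an assumed 5 ms internal HTTP round-trip per address lookup; none of this is the actual system described, just the shape of the N+1 problem versus a single join:

```python
import sqlite3
import time

PER_CALL_LATENCY = 0.005  # assumed 5 ms network round-trip per service hop

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, address_id INTEGER);
    CREATE TABLE addresses (id INTEGER PRIMARY KEY, street TEXT);
""")
db.executemany("INSERT INTO addresses VALUES (?, ?)",
               [(i, f"{i} Main St") for i in range(100)])
db.executemany("INSERT INTO customers VALUES (?, ?, ?)",
               [(i, f"cust{i}", i) for i in range(100)])

def address_service(address_id):
    # Every hop to the "unified address service" pays a network round-trip.
    time.sleep(PER_CALL_LATENCY)
    row = db.execute("SELECT street FROM addresses WHERE id = ?",
                     (address_id,)).fetchone()
    return row[0]

def customers_via_services():
    # N+1 pattern: one internal HTTP-style call per customer.
    return [(name, address_service(aid))
            for _, name, aid in db.execute(
                "SELECT id, name, address_id FROM customers ORDER BY id")]

def customers_via_join():
    # "Macroservice": the data lives in one database, so ask for it once.
    return db.execute("""
        SELECT c.name, a.street FROM customers c
        JOIN addresses a ON a.id = c.address_id
        ORDER BY c.id""").fetchall()
```

With 100 customers, the service-hopping path spends about half a second purely on simulated round-trips; the join returns identical data in microseconds, and the gap only grows with row count.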
Anyway ... I think that mentality is prevalent among a lot of people involved in technology creation: technology has improved so much, Moore's law, etc etc etc.
So let's not worry about how much memory this thing takes, or how much disk space this uses, or how much processing power this takes, or how many network calls. Don't worry about optimization, it's no big deal, look at how fast everything is now.
[+] [-] hammycheesy|2 years ago|reply
I worked on the Shell team until late 2022. There is very little C#, if any at all. The vast majority of the Windows Shell is still C++ with a significant amount of WinRT/COM.
[+] [-] jmmv|2 years ago|reply
What I had in mind when I wrote this sentence though is how the modern apps that people seem to like /are/ C#, such as Windows Terminal and PowerToys, and these feel quite slow to me. But, yeah, calling those the shell is a stretch.
[+] [-] jmmv|2 years ago|reply
Discussion from a few days ago based on the original Twitter thread: https://news.ycombinator.com/item?id=36446933
[+] [-] pradn|2 years ago|reply
But after that, the biggest problem is clearly the framework/language you use. The maxim "premature optimization is the root of all evil" has done damage here. The problem with frameworks/languages is that by the time you finish your features and profile your code, you're already doomed. There's no way to speed up the entire framework/language, because it's part of everything you do - death by a thousand cuts. Nothing you do can improve upon the fundamental fact of running in an interpreter, with a garbage collector, with complex layout calculations (a la HTML/CSS instead of Win32), or with major choices like processes over threads, sync IO over async IO.
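That "part of everything you do" overhead is easy to demonstrate even inside a single language. A small sketch (illustrative, not a rigorous benchmark): the same summation done in an interpreted loop, where every iteration pays dispatch cost, versus a builtin whose loop runs in native code inside the runtime:

```python
import time

def timed(fn):
    # Rough wall-clock timing of a zero-argument callable.
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

data = list(range(1_000_000))

def interpreted_sum():
    total = 0
    for x in data:        # each iteration goes through the interpreter
        total += x
    return total

def builtin_sum():
    return sum(data)      # the loop runs in C inside the runtime
```

Both compute the same number, but the interpreted loop is typically several times slower, and unlike a hot spot found by a profiler, that tax is spread evenly across the whole program.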
[+] [-] syntheweave|2 years ago|reply
But it does require a good understanding of what the application is and does, and a lot of software isn't that: it's just more stuff that has a behavior when you click around and press keys.
[0] https://prog21.dadgum.com/66.html
[+] [-] Animats|2 years ago|reply
GIMP will stall for 10-15 seconds at startup looking for XSANE plugins. Apparently it's calling some external server, which is bad in itself, and that external server is slow. Worse, this delay stalls the entire GUI for both GIMP and other programs.
There's no excuse for this "phoning home". Especially for XSANE, which is a rather bad scanner interface.
[+] [-] adamnew123456|2 years ago|reply
Do you remember where you came across that explanation?
I'd be very surprised if it weren't something like an mDNS query with a high timeout. Which is its own problem (ideally it'd be async), but a far cry from it trying to access something on the internet.
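A sketch of what "ideally it'd be async" could look like: run the slow discovery on a background thread and let the caller wait only a bounded time for it. All names here are hypothetical; this is not GIMP's or SANE's actual code:

```python
import queue
import threading
import time

def probe_scanners(timeout=0.2):
    """Probe for scanners without blocking the caller longer than `timeout`.

    `slow_network_discovery` stands in for the mDNS/SANE lookup; it may take
    seconds, but the UI thread only ever waits `timeout` for an answer."""
    result = queue.Queue(maxsize=1)

    def slow_network_discovery():
        time.sleep(2.0)                      # simulate a lookup with a high timeout
        result.put(["net:scanner0"])

    threading.Thread(target=slow_network_discovery, daemon=True).start()
    try:
        return result.get(timeout=timeout)   # answer arrived in time
    except queue.Empty:
        return []                            # start with no scanners; add later
```

The app starts with an empty scanner list in ~200 ms and can append any late-arriving devices when the background thread finishes, instead of freezing the whole GUI for the lookup's worst case.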
[+] [-] retrocryptid|2 years ago|reply
1. Serve ads.
2. Consume content.
3. Compress audio and video input and upload it to someone's servers.
None of these requires a responsive UI. Ticket closed, works as designed.