Kinda funny but I think LLM-assisted workflows are frequently slow -- that is, if I use the "refactor" features in my IDE it is done in a second, if I ask the faster kind of assistant it comes back in 30 seconds, if I ask the "agentic" kind of assistant it comes back in 15 minutes.
I asked an agent to write an http endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted; it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of stuff manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
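See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...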
As a counter example (re: agents), I routinely delegate simple tasks to Claude Code and get near-perfect results. But I've also had experiences like yours where I ended up wasting more time than saved. I just kept trying with different types of tasks, and narrowed it down to the point where I have a good intuition for what works and what doesn't. The benefit is I can fire off a request on my phone, stick it in my pocket, then do a code review some time later. This process is very low mental overhead for me, so it's a big productivity win.
I've already written about this several times here. I think the current trend of LLMs chasing benchmark scores is going in the wrong direction, at least for programming tools. In my experience they get it wrong often enough that I always need to check the work. So I end up in a back-and-forth with the LLM, and because of the slow responses it becomes a really painful process; I could often have done the task faster if I had sat down and thought about it. What I want is an agent that responds immediately (and I mean sub-second), even if some benchmark score is 60% instead of 80%.
The only thing I've found where an LLM speeds up my work is as a sort of advanced find-and-replace.
A prompt like "I want to make this change in the code wherever any logic deals with XXX: to be/do XXX instead/additionally/some-logic-change/whatever".
It has been pretty decent at these types of changes, and it saves the time of poking through and finding all the places I would have updated manually, in a way that find/replace never could. Though I've never tried this on a huge code base.
I guess it depends? The "refactor" stuff, if your IDE or language server can handle it, then yeah, I find the LLM slower for sure. But there are other cases where an LLM helps a lot.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily URL canonicalization is pretty trivially testable. So I took the most-used customer URLs from our DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 seconds. I then told Zed's agent mode, using Opus, to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
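For flavor, here is a hypothetical sketch of the shape of the result; canonicalize_url and the table of cases are stand-ins, not the commenter's real logic or Claude's actual output:

    from urllib.parse import urlsplit, urlunsplit

    def canonicalize_url(raw: str) -> str:
        """Lowercase scheme/host, drop default ports, trim trailing slashes."""
        raw = raw.strip()
        parts = urlsplit(raw if "://" in raw else "https://" + raw)
        host = (parts.hostname or "").lower()
        port = f":{parts.port}" if parts.port not in (None, 80, 443) else ""
        path = parts.path.rstrip("/") or "/"    # keep a bare "/" for the root
        return urlunsplit((parts.scheme, host + port, path, parts.query, ""))

    # "Minimum spanning" style cases: each one exercises a single rule.
    CASES = [
        ("Example.COM",                "https://example.com/"),
        ("https://example.com:443/a/", "https://example.com/a"),
        ("http://example.com/a?x=1",   "http://example.com/a?x=1"),
    ]

    def test_canonicalize_url():
        for raw, expected in CASES:
            assert canonicalize_url(raw) == expected, (raw, expected)

The nice property is that the function is a pure string-to-string mapping, so auditing the test table is the whole review.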
All the references to LLMs in the article seemed out of place, like poorly done product placement.
LLMs are the antithesis of fast. In fact, being slow is a perceived virtue in LLM output. Some sites like Google and Quora (until recently) simulate the slow typed-output effect for their pre-cached LLM answers, just for credibility.
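As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!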
I switch to VS Code from Cursor many times a day just to use its Python refactoring feature. The Pylance server that comes with Cursor doesn't support refactoring.
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, 22-year-old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quote: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now, 20+ years later, it still counts as one of the top 5 highlights of my career.
P.S. I mention this as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
> "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?"
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
Working with a task scheduling system, we were told that every minute an airplane is delayed costs $10k. This was back in the '90s, so adjust accordingly.
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
Only in the small subset of programmers that post on HN is that the case. Most users, and even most developers, don't mind slow stuff or "getting into flow state" or anything like that; they just want a nice UI. I've seen professional data scientists using GitHub Desktop on Windows instead of just learning to type git commands, for an easy 10x time save.
You're drawing the wrong conclusion: "fast" is a winning differentiator only when you offer the same feature set, but faster.
Your example says it: people will go "this is like X (meaning it does/has the same features as X), but faster", and people will flock from X to your X-but-faster thing.
Which tells us nothing about whether people would also move to an X+more-features, or an X+nicer-UX, or an X+cheaper, etc., without it being any faster than X, or even possibly slower.
Maybe for languages, but fast is easily left behind when people look for frameworks. People want features, people want compatibility; people will use Electron all over.
It's well known by now, but this video[1] is a proof-of-concept demonstration from 4 years ago. Casey Muratori called out Microsoft's new Windows Terminal for slow performance, and people argued that it wasn't possible, practical, or maintainable to make a faster terminal, that his claims of "thousands of frames per second" were hyperbolic; one person said it would be a "PhD level research project".
In response, Casey spent under a week making RefTerm, a skeleton proto-terminal with the same constraints the Microsoft people had: using Windows APIs for things, using DirectDraw with GPU rendering, handling terminal escape codes, colours, blinking, custom fonts, missing-font character fallback, line wrap, scrollback, Unicode and right-to-left Arabic combining characters, etc. RefTerm had 10x the throughput of Windows Terminal and ran at 6,000-7,000 frames per second. It was single-threaded, not profiled, not tuned, with no advanced algorithms and no cheating by sending some data to /dev/null; all it had to speed it up was simple code without tons of abstractions and a Least Recently Used (LRU) glyph cache to avoid re-rendering common characters, written the first way he thought of. Around that time he did a video series on that YouTube channel about optimization, arguing that even talking about 'optimization' was too hopeful and we should be talking about 'not-pessimization': most software is not slow because it has unavoidable complexity and abstractions needed to help maintenance, it's slow because it's choked by a big pile of do-nothing code and abstraction layers added for ideological reasons, which hurt maintenance as well as performance.
This video[2] is another one with specific details: Jason Booth talking about his experience of game development, with practical examples of changing data layout and C++ code to make it do less work, be more cache-friendly, have better memory access patterns, and run orders of magnitude faster without adding much complexity, sometimes even removing it.
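[1] https://www.youtube.com/watch?v=hxM8QmyZXtg - "How fast should an unoptimized terminal run?"
[2] https://www.youtube.com/watch?v=NAVbI1HIzCE - "Practical Optimizations"
The LRU glyph cache is a good example of how simple the non-pessimized version of an idea can be. A minimal sketch of the pattern in Python (refterm itself is C; this only illustrates the general idea, with render_glyph standing in for the expensive rasterization step):

    from collections import OrderedDict

    class GlyphCache:
        def __init__(self, capacity, render_glyph):
            self.capacity = capacity
            self.render_glyph = render_glyph  # expensive: shape + rasterize
            self.cache = OrderedDict()        # glyph key -> rendered bitmap

        def get(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)   # mark as most recently used
                return self.cache[key]
            bitmap = self.render_glyph(key)   # rasterize only on a miss
            self.cache[key] = bitmap
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            return bitmap

Most terminal output repeats a small alphabet of glyphs, so the hit rate is near 100% and the expensive path almost never runs.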
Just want to say how much I thank YCom for not f'ing up the HN interface, and keeping it fast.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
This is interesting. It got me to think. I like it when articles provoke me to think a bit more on a subject.
I have found this true for myself as well. I changed back over to Go from Rust, mostly for the iteration-speed benefits. I would replace "fast" with "quick", however. It isn't so much that I think about raw throughput as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (e.g. Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (e.g. Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
I feel the same way about Go vs Rust. Compilation speed matters. Also, Rust projects resemble JavaScript projects in that they pull in a million deps. Go projects tend to be much less dependency happy.
This is all well and good that we developers have opinions on whether Go compiles faster than Rust or whatever, but the real question is: which is faster for your users?
Fast is also cheap. Especially in the world of cloud computing, where you pay by the second. The only way I could create a profitable transcription service [1] that undercuts the rest was by optimizing every little thing along the way. For instance, just yesterday I learned that the image size I've put together is 2.5× smaller than the next open-source variant. That means faster cold boots, which reduces the cost (and provides a better service).
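[1] https://speechischeap.com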
I've noticed over and over again at various jobs that people underestimate the benefit of speed, because they imagine doing the same workflow faster rather than doing a different workflow.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
I think people also vastly underestimate the cost of context switching. They look at a command that takes 30 seconds and say "what's the point of making it take 3 seconds? you only run it 10 times in a day; it's only 5 minutes". But the cost is definitely way more than that.
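Pavel Durov pays special attention to the speed of his applications. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it, but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously, and it affects user behaviour metrics positively.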
Telegram is pretty slow, both the web interface and the Android app. For example, reactions to a message always take a long time to load (both when leaving one, and when looking at one). Just give me emoji, I don't need your animated emoji!
> Rarely in software does anyone ask for “fast.” We ask for features, we ask for volume discounts, we ask for the next data integration. We never think to ask for fast.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
At 6 out of 8 companies I've worked at (mostly a mixture of tech & finance) I have always had to fight to get any time allotted for performance optimization, to the point where I would usually just do it myself under the radar. Even at companies that measured latency and claimed it was important, it would usually take a backseat to adding more features.
My experience has been that people sometimes obsess over speed for things like how fast a search result returns, but not over things like how fast a page renders or how many bytes we send the user.
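First, efficient code is going to use less electricity, and thus fewer resources will need to be consumed. Second, efficient code means you don't need to be constantly upgrading your hardware.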
At most jobs I've had, fast only becomes a big issue once things are too slow. Or expensive.
It's a retroactively fixed thing. Imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they are not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning, as there is one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what the CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take actions.
Proxy metrics means you likely can't (well, probably should not) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
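As a sketch of what such a proxy metric can look like in practice, here is a minimal latency histogram using Python's prometheus_client; the metric name and handler are invented for illustration:

    from prometheus_client import Histogram, start_http_server

    REQUEST_LATENCY = Histogram(
        "checkout_request_seconds",   # hypothetical metric name
        "Latency of the checkout call",
    )

    @REQUEST_LATENCY.time()           # records the duration of every call
    def handle_checkout(order):
        ...                           # the actual work goes here

    start_http_server(8000)           # exposes /metrics for the scraper

Once the weekly review is looking at that histogram's percentiles, "is it getting slower?" stops being a matter of opinion.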
The website is superfast. The reason I usually go for the comments first on HN is exactly this: they're fast. THIS is notably different.
On interfaces:
It's not only the slowness of the software or machine we have to wait for; it's also the act of moving your limb that adds a delay. Navigating to a button (mouse) adds more friction than a keyboard shortcut. It's a needless feedback loop. If you master your tool, all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think raycast for Linux) extensively for all kinds of interaction with data or file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes if you remove friction.
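To make that concrete, a custom rofi menu can be a few lines; this Python sketch assumes rofi is installed, and the ~/notes path and xdg-open are made-up examples:

    import subprocess
    from pathlib import Path

    notes_dir = Path("~/notes").expanduser()
    notes = sorted(p.name for p in notes_dir.glob("*.md"))
    result = subprocess.run(
        ["rofi", "-dmenu", "-p", "note"],  # -dmenu reads options from stdin
        input="\n".join(notes),
        capture_output=True, text=True,
    )
    if result.returncode == 0:             # non-zero means the menu was dismissed
        subprocess.run(["xdg-open", str(notes_dir / result.stdout.strip())])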
Venerable tools in this vein: vim, i3, kitty (formerly tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
Essence of these tools is always this: move fast, select fast and efficiently, ability to launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
No mention of google search itself being fast. It's one of the poster children of speed being part of the interface.
Microsoft needs to take heed: Explorer's search and Teams, for example, make your computer seem extremely slow. VS Code, on the other hand, is fast enough, while slower than native editors such as Sublime Text.
Only sorta related, but it's crazy to me how much our standards have dropped for speed/responsiveness in some areas.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
I was going to say one of the more recent times fast software excited me was with `uv` for Python packaging, and then I saw that op had a link to Charlie Marsh in the footnote. :)
I once accidentally blocked TCP on my laptop and found out "google.com" runs on UDP, it was a nice surprise.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
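fast is my copilot.
https://en.wikipedia.org/wiki/HTTP/3
https://en.wikipedia.org/wiki/QUIC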
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
> this is how I can tell if this is really a server I used to work on
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
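Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)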
The industry (hint: this forum's readers) has replaced "fast" software with "portable", meaning:
- universally addressable libraries that must load from discrete and often remote sources,
- zero hang time in programming-language evolution (leaving no time for experts to discover, document, and implement optimizations),
- insistence on "latest version" software with no emphasis on long-term code stability.
For what it's worth, I built myself a custom Jira board last month, so I could instantly search, filter, and group tickets (by title, status, assignee, version, ...).
Motivation: running queries and finding tickets in Jira kills me sometimes.
The board is not perfect, but works fast and I made it superlightweight. In case anybody wants to give it a try:
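https://jetboard.pausanchez.com/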
Don't try it on mobile, use desktop. Unfortunately it uses a proxy and requires an API key, but it doesn't store anything in the backend (it just proxies the request because of CORS). Maybe there is an API or a way to query a Jira cloud instance directly from the browser; I just tried the first approach and moved on. It even crossed my mind to add it somehow to the Jira marketplace...
Anyway, it caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.
The UI can be improved, but it uses a minimalistic interface on purpose, like HN.
If anybody tries it, I'll be glad to hear your thoughts.