Steve Jobs famously said that the (personal) computer is "like a bicycle for the mind". It's a great metaphor because, besides the idea of lightness and freedom it communicates, it also describes the computer as a multiplier of human strength: the bicycle lets you travel faster and with much less effort, true, but ultimately the source of its power is still entirely the muscles of the cyclist. You don't get out of it anything you didn't put in yourself.
But the feeling I'm having with LLMs is that we've entered the age of fossil-fuel engines: something that moves on its own power and produces somewhat more than the user needs to put into it. Ok, in the current version it might not go very far and needs to be pushed now and then, but the total energy output is greater than what users need to put in. We could call it a horse, except that this is artificial: it's a tractor. And in recent months I've been feeling like someone who spent years pushing a plough in the fields and has suddenly received a tractor. A primitive model, still imperfect, but already working.
I think there is a legitimate fear that is born from what happened with Chess.
Humans could handily beat computers at chess for a long time.
Then a massive supercomputer beat the reigning champion in a game, but didn't win the match.
Then that computer came back and won the match a year later.
A few years later, humans were collaborating in-game with these master chess engines to multiply their strength, and the human/computer teams became the dominant force in the chess world.
A few years after that, though, the computers started beating the human/computer hybrid opponents.
And not long after that, humans having a hand in the match only made the computer perform worse.
The next few years are probably the most likely candidate since the Cold War for an extreme inflection point in the timeline of human history.
A tractor does exactly what you tell it to do though - you turn it on, steer it in a direction, and it goes. I like the horse metaphor for AI better: still useful, but sometimes unpredictable, and needs constant supervision.
It’s sort of interesting to look back at ~100 years of the automobile and, e.g., the rise of new urbanism in this metaphor - there are undoubtedly benefits that have come from the automobile, and also the efforts to absolutely maximize where, how, and how often people use their automobiles have led to a whole lot of unintended negative consequences.
Fossil-fuel cars are a good analogy because, for all their raw power and capability, living in a polluted, car-dominated world sucks. The problem with modern AI has more to do with modernism than with AI.
Depends who you listen to. Some developers report significant gains from the use of AI, others say it doesn't really impact their work, and then there was some research suggesting that the time savings from using AI in software development are an illusion: while developers felt more productive, they were actually slower. I guess only time will tell who's right, or whether it's just a matter of using the tool in the right way.
I prefer Doctorow's observation that they make us into reverse-centaurs [0]. We're not leading the LLM around like some faithful companion that doesn't always do what we want it to. We're the last-mile delivery driver for an algorithm running in a data center, one that can't take responsibility for the code or ship it to production on its own. We're the horse.
> You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.
Why does HN love analogies? You can pick any animal or thing and it can fit in some way. "Horse" is a docile, safe analogy; it's also the most obvious one. Like, yes, the world gets it: LLMs have limitations, thanks for sharing, we know they're not as good as a programmer.
We should use analogies to point out the obvious thing everyone is avoiding:
Guys, 3 years ago AI wasn’t even a horse. It was a rock. The key is that it transformed into a horse. What will it be in the next 10 years?
AI is a terminator. A couple of years back, someone turned off read-only mode. That’s the better analogy.
Pick an analogy that follows the trendline of continual change into the unknown future, rather than an obvious analogy that keeps your ego and programming skills safe.
I suppose because they resemble the abstractions that make complex language possible. Another world full of aggressive posturing at tweet-length analogistic musings might have stifled some useful English parlance early.
But I reckon that we shouldn't have called it phishing because emails don't always smell.
If an analogy is an "obvious" analogy, that makes it definitionally a good analogy, right? Either way: I don't see why you've got to be so prescriptive about it one way or the other! You can just say you disagree.
Maybe from the client's point of view, although it's more likely a Tamagotchi. But from the server side, it’s more like a whole hippodrome where you need to support horse racing 24/7.
This metaphor really captures the current state well. As someone building products with LLMs, the "you have to tell it where to turn" part resonates deeply.
I've found that the key is treating AI like a junior developer who's really fast but needs extremely clear instructions. The same way you'd never tell a junior dev "just build the feature" - you need to:
1. Break down the task into atomic steps
2. Provide explicit examples of expected output
3. Set up validation/testing for every response
4. Have fallback strategies when it inevitably goes off-road
The real productivity gains come when you build proper scaffolding around the "horse" - prompt templates, output validators, retry logic, human-in-the-loop for edge cases. Without that infrastructure, you're just hoping the horse stays on the path.
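For what it's worth, that scaffolding can be pretty small. Here's a minimal, self-contained Python sketch of the validate/retry/fallback loop described above. `call_model` is a stub standing in for a real LLM API call, and the expected JSON keys are just an example schema, not anything from a real library:

```python
import json

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned response so
    # the sketch runs without network access or credentials.
    return '{"summary": "refactor the parser", "steps": ["extract", "test"]}'

def validate(raw: str):
    # Output validator: the response must be JSON containing the keys
    # we asked for. Returns the parsed dict, or None on failure.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if isinstance(data, dict) and {"summary", "steps"} <= data.keys():
        return data
    return None

def run_task(prompt: str, max_retries: int = 3) -> dict:
    # Retry loop with a human-in-the-loop fallback for edge cases.
    for _attempt in range(max_retries):
        result = validate(call_model(prompt))
        if result is not None:
            return result
    raise RuntimeError("Model kept going off-road; escalate to a human.")

print(run_task("Summarize this ticket as JSON with 'summary' and 'steps'."))
```

In practice `call_model` would also re-inject the validation error into the retry prompt, but the shape is the same: never trust a raw response, always have an exit that hands the reins back to a person.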
The "it eats a lot" point is also critical and often overlooked when people calculate ROI. API costs can spiral quickly if you're not careful about prompt engineering and caching strategies.
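To make the "it eats a lot" point concrete, here's a back-of-the-envelope cost check. The per-token prices and the 90% cached-input discount below are made-up placeholder numbers (check your provider's actual pricing); the point is the shape of the math: with long prompts, input tokens dominate, so caching and shorter prompts are where the savings are.

```python
# Hypothetical rates: $3 per million input tokens, $15 per million
# output tokens, cached input billed at a 90% discount.
IN_RATE, OUT_RATE, CACHE_DISCOUNT = 3.00, 15.00, 0.90

def monthly_cost(calls, in_tok, out_tok, cached_fraction=0.0):
    # Split input tokens into cache hits and fresh tokens, then price
    # each bucket per call and scale up to monthly volume.
    cached = in_tok * cached_fraction
    fresh = in_tok - cached
    per_call = (fresh * IN_RATE
                + cached * IN_RATE * (1 - CACHE_DISCOUNT)
                + out_tok * OUT_RATE) / 1_000_000
    return calls * per_call

# 100k calls/month, 4k-token prompts, 500-token answers:
print(monthly_cost(100_000, 4_000, 500))                        # ≈ $1950/month
print(monthly_cost(100_000, 4_000, 500, cached_fraction=0.75))  # ≈ $1140/month
```

Even with these invented numbers, a 75% cache hit rate cuts the bill by more than 40%, which is why prompt structure (stable prefix first, variable content last) matters for ROI.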
I see AI as an awesome technology, but also like programming roulette.
It could go and do the task perfectly as instructed, or it could do something completely different that you haven't asked for and destroy everything in its path in the process.
I personally found that if you don't give it write access to anything that you can't easily restore and you review and commit code often it saves me a lot of time. It also makes the whole process more enjoyable, since it takes care of a lot of boilerplate for me.
It's definitely NOT intelligent, it's more like a glorified autocomplete but it CAN save a huge amount of time if used correctly.
Maybe the train is software that's built by SWEs (w/ or w/o AI help). Specifically built for going from A to B very fast. But not flexible, and takes a lot of effort to build and maintain.
I wrote this a long time ago, but I think the metaphor was about generative AI applications vs. traditional software applications, not about AI coding agents vs. writing code yourself.
Except when you want it to improve something in a particular way you already know about. Then god forbid it understands what you have asked and makes only that change :/
Sometimes I end up giving up on trying to get the AI to build something following a particular architecture, or to fix a particular problem in its previous implementations.
I've always said that driving a car with modern driver assist features (lane centering / adaptive cruise / 'autopilot' style self-ish driving-ish) is like riding a horse. The early ones were like riding a short sighted, narcoleptic horse. Newer ones are improving but it's still like riding a horse, in that you give it high level instructions about where to go, rather than directly energising its muscles.
Horses have some semblance of self preservation and awareness of danger - see: jumping. LLMs do not have that at all so the analogy fails.
My term, “Automation Improved”, is far more relevant and descriptive of current state-of-the-art deployments. Same phone/text logic trees, next-level macro-type agent work; none of it is free-range. Horses can survive on their own. AI is a task helper, no more.
>LLMs do not have that at all so the analogy fails.
I somewhat disagree with this. AI doesn't have to worry about any kind of physical danger to itself, so it's not going to have any evolutionary function around that. If the linked Reddit thread is to be believed AI does have awareness of information hazards and attempts to rationalize around them.
Eh, this is getting pretty close to a type of binary thinking that breaks down under scrutiny. If, for example, we take any kind of selectively bred animal that requires human care for its continued survival, does this somehow make said animal "improved automation"?
- some bicycle purists consider electric bicycles to be "cheating"
- you get less exercise from an electric bicycle
- they can get you places really effectively!
- if you don't know how to ride a bicycle an electric bicycle is going to quickly lead you to an accident
If you tell it you want to go somewhere continents away, it will happily agree and drive you right into the ocean.
And this is before ads and other incentives make it worse.
> The lower-bound estimate represents 18 percent of the total reduction in man-hours in U.S. agriculture between 1944 and 1959; the upper-bound estimate, 27 percent
I'm not seeing that with LLMs.
* You have to tell it which way to go every step of the way
* Odds are good it'll still drop you off at the wrong place
* You have to pay not only for being taken to the wrong place, but now also for the ride to get you where you wanted to go in the first place
[0] https://locusmag.com/feature/commentary-cory-doctorow-revers...
My favorite quote from the excellent show halt and catch fire. Maybe applicable to AI too?
https://youtube.com/watch?v=oeqPrUmVz-o&t=1m54s
Remember when our job was to hide the ugly techniques we had to use from end users?
I found it very caricatured, too saturated with romance, which is atypical for a tech setting, much like "The Big Bang Theory".
Because HN is like a child and analogies are like images
Pretty good for specific tasks.
Probably worth the input energy, when used in moderation.
Wear the right safety gear, but even this might not help with a kickback.
It's quite obvious to everyone nearby when you're using one.
Language is more or less a series of analogies. Comparing one thing to another is how humans are able to make sense of the world.
> (The horse) is way slower and less reliable than a train but can go more places
What does the 'train' represent here?
A guess: perhaps off-the-shelf software? - rigid, but much faster if it goes where (/ does what) you want it to.
Another one I like is "Hungry ghosts in jars."
https://bsky.app/profile/hikikomorphism.bsky.social/post/3lw...
Horse rumours denied.
https://old.reddit.com/r/singularity/comments/1qjx26b/gemini...