
AI is a horse (2024)

469 points | zdw | 1 month ago | kconner.com

238 comments

[+] throw310822|1 month ago|reply
Famously, Steve Jobs said that the (personal) computer is "like a bicycle for the mind". It's a great metaphor because, besides the lightness and freedom it communicates, it also describes the computer as a multiplier of human strength: the bicycle lets you travel faster and with much less effort, it's true, but the source of its power is still entirely the cyclist's muscles. You don't get out of it anything you didn't put in yourself.

But the feeling I'm having with LLMs is that we've entered the age of the fossil-fuel engine: something that moves under its own power and produces somewhat more than the user puts into it. OK, the current version might not go very far and needs to be pushed now and then, but the total energy output is greater than what the user needs to put in. We could call it a horse, except that this one is artificial: it's a tractor. And in the last few months I've been feeling like someone who spent years pushing a plough in the fields and has suddenly received a tractor. A primitive model, still imperfect, but already working.

[+] simonw|1 month ago|reply
I've been calling LLMs "electric bicycles for the mind", inspired by that Jobs quote.

- some bicycle purists consider electric bicycles to be "cheating"

- you get less exercise from an electric bicycle

- they can get you places really effectively!

- if you don't know how to ride a bicycle, an electric bicycle will quickly lead you into an accident

[+] WarmWash|1 month ago|reply
I think there is a legitimate fear that is born from what happened with Chess.

Humans could handily beat computers at chess for a long time.

Then a massive supercomputer beat the reigning champion in a game, but didn't win the match.

Then that computer came back and won the match a year later.

A few years later humans are collaborating in-game with these master chess engines to multiply their strength, becoming the dominant force in the human/computer chess world.

A few years after that, though, the computers started beating the human/computer hybrid opponents.

And not long after that, humans started making the computer perform worse if they had a hand in the match.

The next few years probably have the highest probability since the Cold War of being an extreme inflection point in the timeline of human history.

[+] kylec|1 month ago|reply
A tractor does exactly what you tell it to do though - you turn it on, steer it in a direction, and it goes. I like the horse metaphor for AI better: still useful, but sometimes unpredictable, and needs constant supervision.
[+] roughly|1 month ago|reply
It’s sort of interesting to look back at ~100 years of the automobile and, eg, the rise of new urbanism in this metaphor - there are undoubtedly benefits that have come from the automobile, and also the efforts to absolutely maximize where, how, and how often people use their automobile have led to a whole lot of unintended negative consequences.
[+] cons0le|1 month ago|reply
It's like a motorbike, except it doesn't take you where you steer. It takes you where it wants to take you.

If you tell it you want to go somewhere continents away, it will happily agree and drive you right into the ocean.

And this is before ads and other incentives make it worse.

[+] tikhonj|1 month ago|reply
Fossil-fuel cars are a good analogy because, for all their raw power and capability, living in a polluted, car-dominated world sucks. The problem with modern AI has more to do with modernism than with AI.
[+] GTP|1 month ago|reply
Depends who you listen to. There are developers reporting significant gains from the use of AI, others saying that it doesn't really impact their work, and then there was some research saying that time savings due to the use of AI in developing software are only an illusion, because while developers were feeling more productive they were actually slower. I guess only time will tell who's right or if it is just a matter of using the tool in the right way.
[+] ivanstojic|1 month ago|reply
When tractors were invented, there was a notable reduction in human employment in agriculture in the USA. From a research paper (https://faculty.econ.ucdavis.edu/faculty/alolmstead/Recent_P...):

> The lower-bound estimate represents 18 percent of the total reduction in man-hours in U.S. agriculture between 1944 and 1959; the upper-bound estimate, 27 percent

I'm not seeing that with LLMs.

[+] maypop|1 month ago|reply
Having recently watched Train Dreams, it feels like the transition from logging by hand to logging with industrial machinery.
[+] bitwize|1 month ago|reply
AI is a Boston taxicab:

* You have to tell it which way to go every step of the way

* Odds are good it'll still drop you off at the wrong place

* You have to pay not only for being taken to the wrong place, but now also for the ride to get you where you wanted to go in the first place

[+] agentultra|1 month ago|reply
I prefer Doctorow's observation that they make us into reverse-centaurs [0]. We're not leading the LLM around like some faithful companion that doesn't always do what we want it to. We're the last-mile delivery driver of an algorithm running in a data-center that can't take responsibility for and ship the code to production on its own. We're the horse.

[0] https://locusmag.com/feature/commentary-cory-doctorow-revers...

[+] oliwary|1 month ago|reply
"Computers aren't the thing. They're the thing that gets you to the thing."

My favorite quote from the excellent show Halt and Catch Fire. Maybe applicable to AI too?

[+] latexr|1 month ago|reply
Something like that used to be Apple’s driving force under Steve Jobs (definitely no longer under Tim Cook).

https://youtube.com/watch?v=oeqPrUmVz-o&t=1m54s

> You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.

[+] ericmcer|1 month ago|reply
I am really looking forward to that idea catching up with AI. Right now AI is the thing and the products it enables are secondary.

Remember when our job was to hide the ugly techniques we had to use from end users?

[+] BoredomIsFun|1 month ago|reply
> excellent show "halt and catch fire".

I found it very caricatured, too saturated with romance, which is atypical of the tech environment, much like "The Big Bang Theory".

[+] threethirtytwo|1 month ago|reply
Why does HN love analogies? You can pick any animal or thing and it will fit in some way. The horse is a docile, safe analogy, and also the most obvious one. Yes, the world gets it: LLMs have limitations, thanks for sharing, we know it’s not as good as a programmer.

We should use analogies to point out the obvious thing everyone is avoiding:

Guys, 3 years ago AI wasn’t even a horse. It was a rock. The key is that it transformed into a horse… what will it be in the next 10 years?

AI is a terminator. A couple years back someone turned off read only mode. That’s the better analogy.

Pick an analogy that follows the trendline of continual change into an unknown future, rather than an obvious analogy that keeps your ego and programming skills safe.

[+] niam|1 month ago|reply
> Why does HN love analogies?

I suppose because they resemble the abstractions that make complex language possible. Another world full of aggressive posturing at tweet-length analogistic musings might have stifled some useful English parlance early.

But I reckon that we shouldn't have called it phishing because emails don't always smell.

[+] GuB-42|1 month ago|reply
> Why does HN love analogies?

Because HN is like a child and analogies are like images

[+] danw1979|1 month ago|reply
How about "AI is a chainsaw" ?

Pretty good for specific tasks.

Probably worth the input energy, when used in moderation.

Wear the right safety gear, but even this might not help with a kickback.

It's quite obvious to everyone nearby when you're using one.

[+] beepbooptheory|1 month ago|reply
If an analogy is an "obvious" analogy that makes it definitionally a good analogy, right? Either way: don't see why you gotta be so prescriptive about it one way or the other! You can just say you disagree.
[+] samtp|1 month ago|reply
Every AI analogy compares it to something people feel the technology resembles, even though it obviously is not that thing.

Language is more or less a series of analogies. Comparing one thing to another is how humans are able to make sense of the world.

[+] tetris11|1 month ago|reply
It's also a big bloatey gas bag that needs constant de-farting to function
[+] dmitrijbelikov|1 month ago|reply
Maybe from the client's point of view, although it's more likely a Tamagotchi. But from the server side, it’s more like a whole hippodrome where you need to support horse racing 24/7
[+] MarceliusK|1 month ago|reply
It's a nice reminder that most metaphors break unless you ask whose perspective they're describing
[+] MarceliusK|1 month ago|reply
Anyone claiming the horse understands the journey, or worse, wants to take you somewhere, is selling mythology
[+] jpalepu33|1 month ago|reply
This metaphor really captures the current state well. As someone building products with LLMs, the "you have to tell it where to turn" part resonates deeply.

I've found that the key is treating AI like a junior developer who's really fast but needs extremely clear instructions. The same way you'd never tell a junior dev "just build the feature" - you need to:

1. Break down the task into atomic steps

2. Provide explicit examples of expected output

3. Set up validation/testing for every response

4. Have fallback strategies when it inevitably goes off-road

The real productivity gains come when you build proper scaffolding around the "horse" - prompt templates, output validators, retry logic, human-in-the-loop for edge cases. Without that infrastructure, you're just hoping the horse stays on the path.

The "it eats a lot" point is also critical and often overlooked when people calculate ROI. API costs can spiral quickly if you're not careful about prompt engineering and caching strategies.
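That validate-retry-escalate scaffolding can be sketched in a few lines. `call_model` here is a hypothetical stand-in for whatever LLM API you use, and the shape of the validator and retry loop is an assumption, not any particular library's API:

```python
import json


def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; here it returns canned JSON.
    return '{"summary": "ok"}'


def validate(raw: str):
    # Output validator: reject anything that isn't JSON with the expected field.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return data if "summary" in data else None


def ask(prompt: str, retries: int = 3) -> dict:
    # Retry logic: validate every response; after N failures, escalate
    # to the human-in-the-loop instead of shipping bad output.
    for _ in range(retries):
        result = validate(call_model(prompt))
        if result is not None:
            return result
    raise RuntimeError("model kept going off-road; hand this one to a human")


print(ask("Summarize the release notes as strict JSON"))
```

The point of the wrapper is that the application never sees a raw model string, only validated structure or an explicit failure.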

[+] altern8|1 month ago|reply
I see AI as an awesome technology, but also a bit like programming roulette.

It could go and do the task perfectly as instructed, or it could do something completely different that you haven't asked for and destroy everything in its path in the process.

I personally found that if I don't give it write access to anything I can't easily restore, and I review and commit code often, it saves me a lot of time. It also makes the whole process more enjoyable, since it takes care of a lot of boilerplate for me.

It's definitely NOT intelligent, it's more like a glorified autocomplete but it CAN save a huge amount of time if used correctly.

[+] MarceliusK|1 month ago|reply
The safety practices you describe are basically the right mental model: assume it's fallible, keep writes reversible, review everything, commit often
[+] Eliezer|1 month ago|reply
"2024 AI was a horse". People really like to imagine that the last 6 months constitute their true observation of the new eternal state of the future.
[+] skapadia|1 month ago|reply
Exactly. We're headed for a discontinuity, not an inflection point.
[+] nomilk|1 month ago|reply
The metaphor makes sense in comparing a human walking (SWE w/o AI) to a human riding on a horse (SWE w/ AI), except for:

> (The horse) is way slower and less reliable than a train but can go more places

What does the 'train' represent here?

A guess: perhaps off-the-shelf software? - rigid, but much faster if it goes where (/ does what) you want it to.

[+] spot5010|1 month ago|reply
I had the same question.

Maybe the train is software that's built by SWEs (w/ or w/o AI help). Specifically built for going from A to B very fast. But not flexible, and takes a lot of effort to build and maintain.

[+] easeout|1 month ago|reply
I wrote this a long time ago, but I think the metaphor was about generative AI applications vs. traditional software applications, not about AI coding agents vs. writing code yourself.
[+] jonplackett|1 month ago|reply
All true, apart from "you can only lead it to water": it drinks ALL the water regardless of anything else.
[+] eightys3v3n|1 month ago|reply
Except when you want it to improve something in a particular way you already know about. Then god forbid it understands what you have asked and makes only that change :/

Sometimes I end up giving up on trying to get the AI to build something following a particular architecture, or to fix a particular problem in its previous implementations.

[+] taneq|1 month ago|reply
I've always said that driving a car with modern driver assist features (lane centering / adaptive cruise / 'autopilot' style self-ish driving-ish) is like riding a horse. The early ones were like riding a short sighted, narcoleptic horse. Newer ones are improving but it's still like riding a horse, in that you give it high level instructions about where to go, rather than directly energising its muscles.
[+] 6stringmerc|1 month ago|reply
Horses have some semblance of self preservation and awareness of danger - see: jumping. LLMs do not have that at all so the analogy fails.

My term of “Automation Improved” is far more relevant and descriptive in current state of the art deployments. Same phone / text logic trees, next level macro-type agent work, none of it is free range. Horses can survive on their own. AI is a task helper, no more.

[+] pixl97|1 month ago|reply
>LLMs do not have that at all so the analogy fails.

I somewhat disagree with this. AI doesn't have to worry about any kind of physical danger to itself, so it's not going to have any evolutionary function around that. If the linked Reddit thread is to be believed, AI does have awareness of information hazards and attempts to rationalize around them.

https://old.reddit.com/r/singularity/comments/1qjx26b/gemini...

>Horses can survive on their own.

Eh, this is getting pretty close to a type of binary thinking that breaks down under scrutiny. If, for example, we take any selectively bred animal that requires human care for its continued survival, does that somehow make the animal "improved automation"?