They are not eliminating jobs; there are still jobs in 1984, which is where we are heading. You still need to hire someone to do the mass surveillance and policing, and to enforce laws that get more draconian by the day. And you still need people to instigate-cough-motivate hatred against something to keep society's momentum and shift its focus. All of that still takes labor, but AI makes it easier.
We are indeed entering a post-job-scarcity environment, though. We've seen lots of ghost postings and non-responses for years now: 6 out of 10 applications are ghosted, 2 out of 10 get a "no", and only a few remain. Jobs are getting rarer and are becoming more of a status symbol than a means of breadwinning.
No field is safe, and trying to switch careers over 40 is almost impossible. Even flipping burgers is nearly impossible without prior experience at that age.
The elimination of jobs necessarily 'makes a path' to a post-work society. Post-work couldn't exist without it. Beyond that, it isn't in AI companies' power to shape economies and societies for post-work (which is what I assume you're really getting at here). All Altman, Amodei, Hassabis and the others can do is alert policymakers to what's coming, and they're trying pretty hard to do that, aren't they? Often in the teeth of the skepticism we see so much of on this site. Really, if policymakers won't look ahead, the AI companies can't be blamed for the bumps we're going to hit.
I haven't watched the whole interview. In the clip, a couple of things jump out:
1. He was speaking to a receptive audience. Notice the head nods when he starts to make the comparison between the energy needed to bring a human up to speed and the energy needed to train an AI.
2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.
It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
> He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.
When people have to interpret what you are saying, assuming that you are too intelligent and empathic to mean what you actually said, I think it says a lot.
"What he said is wrong, illogical and dangerous, but you have to forget it and consider that he probably meant this completely different thing that I will now lay out for you. Because he cannot be rich and powerful AND capable of expressing basic ideas on his own, what did you expect?"
I didn't read/hear it as reducing human life to 'training energy', but I don't like the comparison at the technical level.
Firstly, the math isn't even close. A human being consumes maybe 15 MWh of food energy from years 0 to 20. Modern frontier models take on the order of 100,000 MWh to train. That's roughly a 7,000x difference. Furthermore, the human is actively doing 'inference' (living, acting, producing) during those 20 years of training and is also doing lots of non-brain stuff.
Besides the energy math, it's comparing apples-to-oranges. A human brain doesn't start out as a blank slate; it has billions of years of evolutionary priors for language and spatial reasoning that LLMs have to teach themselves from scratch, so this could explain why a human can do some things cheaper. Also, the learning material available to a human is inherently created to be easily ingested by a human brain, whereas a blank LLM needs to build the capacity to process that data.
Altman seems to hint at a comparison to the whole human evolution, but that seems unfair in the other direction, because humans and human evolution had to make discoveries from scratch and trial and error whereas LLMs get to ingest the final "good stuff". But either way you slice it, it's just not a good comparison, though not an 'inhuman' or immoral one.
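For what it's worth, the figures in this subthread can be sanity-checked in a few lines. Every input below is the comment's own assumption (2,000 kcal/day, 100,000 MWh per training run), not a measurement:

```python
KCAL_TO_KWH = 1.163e-3  # 1 kcal is about 0.001163 kWh

def human_food_energy_mwh(kcal_per_day: float = 2000.0, years: float = 20.0) -> float:
    """Food energy a person consumes over `years`, in MWh."""
    return kcal_per_day * KCAL_TO_KWH * 365 * years / 1000

human_mwh = human_food_energy_mwh()  # ~17 MWh, close to the quoted 15 MWh
model_mwh = 100_000.0                # the comment's frontier-model training figure
ratio = model_mwh / human_mwh        # several thousand times
```

Whether 100,000 MWh is the right training figure is itself contested upthread; the sketch only checks that the arithmetic hangs together.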
A US resident consumes 76 MWh per year [0], so 1.52 GWh over 20 years. A single model can be trained once and used by millions. Therefore LLMs are ~10000x more energy efficient than humans.
Good question. It sounds like post-humanism, which, even in leftist art circles, was considered 'interesting' ten years ago (like 'post-anthropocene'). These are not very useful terms, so I appreciate the nuance of 'less valuable human'. It is not so catchy though; maybe we need to dig deeper. I am sure this has been discussed before.
I see some folks here defending Altman because it was an off-the-cuff remark in front of a receptive audience. But why does this make the comment acceptable? Would you forgive me if I talked about eating babies, but defended myself by saying that I was speaking to a receptive audience?
Most charitably, it's a dumb thing to say. It compares two unrelated things if you see the value of human life to be more than just answering prompts. Less charitably, the argument is evil: if he was trying to make a sincere apples-to-apples comparison, it implies that he doesn't value human life beyond the labor his company can automate.
I can understand edgy teenagers making arguments like that on LessWrong forums, but Altman ought to know better. He either doesn't, or he sincerely believes what the comment implies.
The problem I see is that in our society, CEOs are chosen for their ability to convince people that they can increase productivity, not for their ability to improve people's lives.
Just like the paperclip AI issue, CEOs are optimising for arbitrary metrics, and they are really good at that (because we select them precisely for that).
So obviously, as soon as you start wondering about how competent a CEO is at talking about life, you're in for a treat. He obviously has no idea about life. He is just a successful paperclip production machine.
What scares me is that we select those people for their ability to convince that they will generate money, in the hope that they will actually do that, and then we value their opinion about completely unrelated topics.
You may as well ask a curling professional athlete what they think about the problem of AI and energy. Not that they necessarily will say something as dumb as Altman of course, but you wouldn't behave as if they were experts in the field of... you know... the impact of energy on humanity and life in general.
What a depressing view of life. I don't expect him to take on some religious or philosophical view, but come on: how could you grow up somewhere wonderful, start a successful company with a lot of people you probably like and enjoy working with, have enough money to buy an island, and still summarize life like that?
I prefer Richard Branson's worldview. He's rich, but seeing the way he talks about his late wife and her memory warms my heart. I envy him for the human parts of his life, not just the success.
CEOs are a mix of scary, funny, innovative and naive persons. It's the first time an LLM has been compared with a human in terms of energy. I will not comment on the foolishness and superficiality of the quote. I would add that a human can meet another human and they can make another human.
People dismiss this as a meme too quickly, but I think this is a good thought experiment, not only for comparing energy consumption but also learning efficiency. AI is often criticized for its low learning efficiency, but compared to a human it's not looking too bad. Let's say a human becomes an AGI-level learner by the time they are 14 years old. Human vision is approx. 500 megapixels, which is approx. 1.7 GB per second of vision data. That means it takes approx. 800 PETABYTES of data to 'pre-train' a human into a good-enough generalist learner. Take Llama 4 from Meta, whose training data set consisted of 30 trillion tokens: this is equivalent to 120 TB, which is a mere 0.12 petabytes.
I am well aware this is flimsy napkin math at best, but I find that comparing LLMs to humans in a more serious tone is a fun and useful thought experiment.
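A quick sketch of the napkin math above, with every input taken from the comment itself (1.7 GB/s of vision data, 14 years, 30 trillion tokens) plus one assumption of mine (about 4 bytes per token):

```python
def human_visual_input_pb(years: float = 14.0, gb_per_s: float = 1.7) -> float:
    """Raw visual data a human 'ingests' by age `years`, in petabytes."""
    seconds = years * 365 * 24 * 3600
    return gb_per_s * seconds / 1e6  # GB -> PB

def llm_training_pb(tokens: float = 30e12, bytes_per_token: float = 4.0) -> float:
    """Training-set size in petabytes for a given token count."""
    return tokens * bytes_per_token / 1e15  # bytes -> PB

human_pb = human_visual_input_pb()  # ~750 PB, the comment's "800 petabytes"
llm_pb = llm_training_pb()          # 0.12 PB
```

The human figure assumes the eyes stream data 24 hours a day; discounting sleep would shave off roughly a third, without changing the orders of magnitude.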
Sam Altman (and everyone else in the field) complains that estimates of AI's water and power consumption are wrong, but instead of just publishing the data, they come up with this crap.
What data do you want to see published about water consumption? Here are Google's tiny, tiny estimates[1], for example. AFAIK AI water usage has always been a made-up issue, spread by people who never realized before how much water is routinely spent by humanity.
The reductionism and comparison of a human life to a corporate product is disgusting but it's valuable to see how they truly see the world they are creating.
Their idea of a person's value seems to be less than that of the Soviet communists at this point: nothing but work units.
I think this reveals a great deal about the thinking of the ruling elites.
The K shaped recovery phenomenon demonstrated that the economy can continue to thrive, when consumption by the lowest earners is replaced and concentrated by earners at the top. This demonstrated to the elites that actually, we don't need as many consumers to grow the economy, and that it's possible to redistribute wealth upward without losing growth.
These public comments just show that the elites are more and more comfortable making it explicit that there are too many "useless eaters" in their opinion, and that the change has been from considering just the Third World to be where these "useless eaters" are while still preserving an imperial core, to now considering everyone that isn't them, regardless of First or Third world, to be a useless eater.
Very dangerous thinking, but at least it's out in the open now.
They want to capture the entire value of everyone's labor and hoard it for themselves, and discard the people that produced it.
This is a profound category error. What Altman reduces to a 20-year 'training' cycle fueled by 'energy' is what we, in the actual world, call life. It is a stunningly hollow perspective that uses the language of industrial output to describe the human experience. While he is likely being provocative to keep his product at the center of the cultural conversation, it probably exposes something about him.
Exactly why we need to rid ourselves (through taxes) of billionaires. Those people have way too much power, and are often stupid dumbasses who just got rich randomly (right place at the right time, or because their parents were rich in the first place), but are mostly spewing stupid lunacies.
Elon Musk is perhaps the world’s most famous doom-monger and has repeatedly sounded the alarm about the possibility of super-smart machines wiping out humanity.
But Google founder Larry Page allegedly dismissed these fears as ‘speciesist’ during an argument at a Napa Valley party in 2015.
A top professor at the Massachusetts Institute of Technology (MIT) has claimed the two tech moguls clashed in a ‘long and spirited debate’ in the early hours of the morning.
In his book Life 3.0: Being Human In The Age of Artificial Intelligence, Max Tegmark wrote: ‘[Page’s] main concerns were that AI paranoia would delay the digital utopia and/or cause a military takeover of AI that would fall foul of Google’s “don’t be evil” slogan.
‘Elon kept pushing back and asked Larry to clarify details of his arguments, such as why he was so confident that digital life wouldn’t destroy everything we care about.
‘At times, Larry accused Elon of being “speciesist”: treating certain life forms as inferior just because they were silicon-based rather than carbon-based.’
- A human uses between 100W (naked human eating 2000kcal/day) to 10kW (first-world per capita energy consumption).
- Frontier models need something like 1-10 MW-years to train.
- Inference requires 0.1-1 kW computers.
So it takes thousands of human-years to train a single model, but they run at around the same wall-clock power consumption as a human. Depending on your personal opinion, they are also 0.1-1000x as productive as the median human in how much useful work (or slop) they can produce per unit time.
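Taking the parent's ranges at face value (they are rough assumptions, not measurements), the "thousands of human-years per training run" claim works out like this:

```python
def human_years_per_training_run(train_mw_years: float, human_watts: float) -> float:
    """How many human-years of continuous power draw equal one training run."""
    return train_mw_years * 1e6 / human_watts

# Extremes of the parent's ranges: 1-10 MW-years of training,
# 100 W (metabolic floor) to 10 kW (first-world per-capita) per human.
low = human_years_per_training_run(1, 10_000)  # 100 human-years
high = human_years_per_training_run(10, 100)   # 100,000 human-years
```

"Thousands" sits comfortably inside that 100 to 100,000 span, so the claim holds across the stated ranges.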
The human brain is also a product of billions of years of evolution. We branched off from our common ancestor 7-9 million years ago. We encode quite a lot of structure and information that is essential for intelligence, so counting only a single lifetime of training understates the starting point.
If you calculate 100 W * 24 h * 365 days * 7 million years, that comes to roughly 6.1 TWh to 'train'.
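Keeping the units consistent (100 W of continuous draw and 7 million years are the comment's assumptions), the evolutionary "training budget" comes out to a few terawatt-hours:

```python
def evolutionary_training_twh(watts: float = 100.0, years: float = 7e6) -> float:
    """Energy of a constant power draw sustained over `years`, in TWh."""
    hours = years * 365 * 24
    return watts * hours / 1e12  # Wh -> TWh

twh = evolutionary_training_twh()  # ~6.1 TWh
```

That is still tens of times larger than frontier-model training runs, though of course a single lineage's metabolism is a questionable proxy for "evolution's energy bill".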
So he's comparing a human being to AI, finally showing what our AI overlords think of humanity: we're just wasteful resources to be replaced by more efficient tools.
I’m not sure it’s possible to conclude what he actually believes from public statements. I do not trust him to tell the truth about anything related to AI.
To be fair, it is not just him. There is an entire caste of people across organizations who see employees as a problem. It is absolutely fascinating to watch, because those people tend to be somewhere in the management class and appear to derive a fair amount of happiness from said managing (and we can argue whether those skills are any good).
He may well be as you say, but nothing in this video is evidence of that. To the extent he's a slimy sociopath, he's not openly twirling his metaphorical moustache here, and he's a lot better at hiding villainy than most of the better-known slimy sociopaths in the world today (for comparison, Musk actually tweeted "If this works, I’m treating myself to a volcano lair. It’s time."; this isn't even at that level).
He's responding to all the people very upset about how much energy AI takes to train.
That said, a quick over-estimate of human "training" cost is 2500 kcal/day * 20 years = 21.21 MWh[0], which is on the low end of the estimates I've seen for even one single 8 billion parameter model.
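The parent's Wolfram Alpha figure reproduces directly (2,500 kcal/day over 20 years is the stated assumption):

```python
KCAL_TO_KWH = 1.163e-3  # 1 kcal is about 0.001163 kWh

# 2500 kcal/day for 20 years, converted to MWh.
mwh = 2500 * KCAL_TO_KWH * 365.25 * 20 / 1000  # ~21.2 MWh
```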
The AI "movement" is hermetic magick. The goal is to bring about God in silico, because if you're not involved in so doing, God may punish you for eternity when he emerges:
cors-fls|8 days ago
Unfortunately these companies are working to eliminate jobs, but not in any way making a path for a transition to a post-work society.
stevefan1999|8 days ago
iberator|8 days ago
AI is taking jobs faster than making new ones!
squidbeak|8 days ago
UltraSane|8 days ago
erulabs|8 days ago
He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.
If Altman is to blame for anything, it’s that AI is a scissor-generator extraordinaire.
throwyawayyyy|8 days ago
MattDaEskimo|8 days ago
palata|7 days ago
YurgenJurgensen|8 days ago
xnx|8 days ago
accounting2026|8 days ago
WithinReason|7 days ago
https://ourworldindata.org/energy-production-consumption#per...
ncr100|8 days ago
Edit: Or perhaps more correctly, "less valuable human". Which is more appropriate?
thenthenthen|8 days ago
lich_king|8 days ago
palata|7 days ago
rspoerri|8 days ago
morkalork|7 days ago
Fricken|8 days ago
unknown|7 days ago
[deleted]
kylehotchkiss|8 days ago
dk1138|8 days ago
tsoukase|6 days ago
emregucerr|6 days ago
lccerina|6 days ago
stratos123|6 days ago
[1] https://cloud.google.com/blog/products/infrastructure/measur...
mhher|8 days ago
In that light, Altman saying things like that is not really surprising. On the contrary, it only reinforces their desperation to me.
juancn|8 days ago
A human at rest uses ~100 W, up to ~400 W for an elite athlete under effort.
So 20 years at 200 W (I'm being generous here) ends up being 35 MWh, still cheaper, and inference still runs at under 200 W!
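In watts (power) rather than watt-hours, the estimate above works out as follows; the 200 W sustained draw is the comment's deliberately generous assumption:

```python
def lifetime_energy_mwh(avg_watts: float = 200.0, years: float = 20.0) -> float:
    """Total energy of a constant power draw over `years`, in MWh."""
    return avg_watts * years * 365 * 24 / 1e6  # Wh -> MWh

total_mwh = lifetime_energy_mwh()  # ~35 MWh
```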
_DeadFred_|7 days ago
xnx|8 days ago
sc68cal|8 days ago
unknown|7 days ago
[deleted]
jmfldn|8 days ago
unknown|8 days ago
[deleted]
oulipo2|8 days ago
csallen|8 days ago
lysace|7 days ago
Context:
(https://metro.co.uk/2018/05/02/elon-musks-fears-artificial-i...)
jethronethro|8 days ago
sxp|8 days ago
ncr100|8 days ago
Therefore its value is infinite. Therefore Altman's hypothesis is toilet paper thin.
cheeseblubber|8 days ago
drcongo|8 days ago
dk1138|8 days ago
HeavyStorm|7 days ago
andsoitis|8 days ago
eli|8 days ago
iugtmkbdfil834|8 days ago
reactordev|8 days ago
dyauspitr|8 days ago
atomicnumber3|8 days ago
Why does it turn out that every single billionaire is also some combination of narcissist, pedophile, petty tyrant, or just utter freakazoid?
ben_w|8 days ago
[0] https://www.wolframalpha.com/input?i=2500+kcal%2Fday+*+20+ye...
bitwize|8 days ago
https://en.wikipedia.org/wiki/Roko's_basilisk
Next to the might and terror of the machine God, mere humans are, individually, indeed as nothing...
DemocracyFTW2|7 days ago
unknown|8 days ago
[deleted]
add-sub-mul-div|8 days ago
heliumtera|8 days ago
We only care about pelicans riding bicycles