erulabs | 7 days ago
He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.
If Altman is to blame for anything, it’s that AI is a scissor-generator extraordinaire.
throwyawayyyy | 7 days ago
1. He was speaking to a receptive audience. Note the heads nodding when he starts to draw the comparison between the energy needed to bring a human up to speed and the energy needed to train an AI.
2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.
It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
nozzlegear | 7 days ago
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
Exactly. Perhaps in Altman's world, a human exists specifically to do tasks for him. But in reality, that human was always going to exist and was going to use those 20 years of energy anyway; they only happened to be employed by his rich ass when he wanted them to do a task. It's not equivalent to burning energy on training an LLM to do that task.
ncr100 | 7 days ago
AFAIK a CEO's job includes setting the vision.
This example sets a post-human, or at least less-valuable-human, paradigm.
ben_w | 7 days ago
I don't see him calling for an LLM to have rights. I don't think this is part of how OpenAI considers its work at all. Anthropic is open-minded about the possibility, but OpenAI is basically "this is a thing, not a person, do not mistake it for a person".
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
His point is flawed in other ways: the AI's competence is limited; an adult human eating food for 20 years has an energy cost at the low end of the estimates for training even a very small, very rubbish LLM, and nowhere near the cost of training one that anyone would actually care about; even the fancy models are only okay, not great; and many models get trained, so it's not a one-time cost. In the other direction, each human has to be trained separately, and there are 8 billion of us. What he says in the video doesn't help much either; it's vibes rather than analysis.
But your point here is the wrong thing to call a flaw.
The human is here anyway? First, no: *some* humans are here anyway, but various governments are currently increasing pension ages due to the insufficient number of new humans available to economically support people who are claiming pensions.
Second: so what if it was yes? That argument didn't stop us substituting combustion engines and hydraulics for human muscle.
palata | 6 days ago
When people have to reinterpret what you said, on the assumption that you are too intelligent and empathetic to have meant it, I think that says a lot.
"What he said is wrong, illogical, and dangerous, but you have to set it aside and assume he probably meant this completely different thing that I will now explain to you. He can't be rich and powerful AND capable of expressing basic ideas on his own; what did you expect?"