shmatt|10 months ago
Now we're up to o4, AGI is still nowhere in sight (depending on your definition, I know), and OpenAI is up to about 5,000 employees. I'd think that even before AGI, a new model would be able to cover for at least 4,500 of those employees being let go. Is that not the case?
steamrolled|10 months ago
OpenAI is at a much earlier stage in their adventures and probably doesn't have that much baggage. Given their age and revenue streams, their headcount is quite substantial.
shmatt|10 months ago
Not directly from OpenAI - but people in the industry are advertising how these advanced models can replace employees, yet they keep going on hiring tears (including OpenAI). Let's see the first company stand behind its models and replace 50% of its existing headcount with agents. That, to me, would be a sign these things are going to replace people's jobs. Until I see that - if OpenAI can't figure out how to replace humans with models, then no one will.
I mean, could you imagine if today's announcement was that the chatgpt.com webdev team has been laid off, and all new features and fixes will be completed by Codex CLI + o4-mini? That would mean they believe in the product they're advertising. Until they do something like that, they'll keep trusting those human engineers and try selling other people on the dream.
scarface_74|10 months ago
Or maybe it’s just nonsensical to compare the number of employees across companies - especially when they don’t do nearly the same thing.
On a related note, wait until you find out how many more employees Apple has than Google, since Apple has tens of thousands of retail employees.
fsndz|10 months ago
Deep learning models will continue to improve as we feed them more data and use more compute, but they will still fail at even very simple tasks as long as the input data are outside their training distribution. The numerous examples of ChatGPT (even the latest, most powerful versions) failing at basic questions or tasks illustrate this well. Learning from data is not enough; there is a need for the kind of System 2 thinking we humans develop as we grow. It is difficult to see how deep learning and backpropagation alone will help us model that. https://medium.com/thoughts-on-machine-learning/why-sam-altm...
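The out-of-distribution point can be illustrated with a deliberately tiny toy (my own sketch, not from the linked article): fit a model purely from data (here, an ordinary least-squares line) to a quadratic target over a narrow input range. Inside the training range it looks fine; far outside it, the error explodes.

```python
# Toy illustration of out-of-distribution failure: a least-squares line
# fit to y = x^2 on x in [0, 2] does fine near its training data and
# fails badly far outside that range.

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# "Training distribution": the quadratic sampled only on x in [0, 2].
xs = [i / 10 for i in range(21)]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)

in_dist_err = abs((a * 1.5 + b) - 1.5 ** 2)  # inside the training range
ood_err = abs((a * 10 + b) - 10 ** 2)        # far outside it
print(f"in-distribution error:    {in_dist_err:.2f}")
print(f"out-of-distribution error: {ood_err:.2f}")
```

The fitted line is roughly y = 2x - 0.63: off by about 0.1 at x = 1.5, but off by about 80 at x = 10. No amount of fitting within [0, 2] tells the model the function curves upward outside it.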
stavros|10 months ago
So at least two years old?
throwanem|10 months ago
Oh, not that I haven't been as knocked about in the interim, of course. I'm not really claiming I'm better, and these are frightening times; I hope I'm neither projecting nor judging too harshly. But even trying to discount for the possibility, there still seems something new left to explain.
chrsw|10 months ago
But, we don’t need AGI/AHI to transform large parts of our civilization. And I’m not seeing this happen either.
BosunoB|10 months ago
This is the ai-2027.com argument. LLMs only really have to get good enough at coding (and then researching), and it's singularity time.
MoonGhost|10 months ago
It's not only a matter of definition. One Googler was sure their model was conscious.
actsasbuffoon|10 months ago
I tried it last night with Gemini 2.5 Pro, and it made it 6 turns before it started making illegal moves, and 8 turns before it got so confused about the state of the board that it refused to play with me any longer.
These models will happily lose a queen to take a pawn. They fail to understand how pieces are even allowed to move, hallucinate the existence of new pieces, repeatedly declare checkmate when it isn't, etc.
I was in the chess club in 3rd grade. One of the top-ranked LLMs in the world is vastly dumber at chess than I was in 3rd grade. But we're going to pour hundreds of billions into this in the hope that it can end my career? Good luck with that, guys.
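For what it's worth, the "turns before an illegal move" experiment above is easy to make rigorous. A minimal sketch of such a harness, assuming the third-party python-chess package; `get_llm_move` is a hypothetical stand-in for a real model call:

```python
# Count how many plies pass before a model produces an illegal move,
# using python-chess to validate each move it proposes.
import chess

def get_llm_move(board: chess.Board) -> str:
    # Hypothetical: a real harness would prompt the model with the game
    # so far and parse a SAN move from its reply. This stub just returns
    # the first legal move, so the harness itself can be exercised.
    return board.san(next(iter(board.legal_moves)))

def turns_until_illegal(max_turns: int = 40) -> int:
    board = chess.Board()
    for turn in range(max_turns):
        if board.is_game_over():
            return turn  # game ended legally before any illegal move
        try:
            board.push_san(get_llm_move(board))  # raises on illegal SAN
        except ValueError:
            return turn  # the model's move was illegal
    return max_turns

print(turns_until_illegal())
```

Swap the stub for a real API call and you get the "6 turns" number directly instead of eyeballing it.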
JFingleton|10 months ago
An AlphaStar-type model would wipe the floor with anyone at chess.
schindlabua|10 months ago
I remember being extremely surprised when I could ask GPT-3 to rotate a 3D model of a car in its head and tell me what I would see when sitting inside, or which doors would refuse to open because they're in contact with the ground.
It really depends on how much you want to shift the goalposts on what constitutes "simple".
famouswaffles|10 months ago
The best model you can play against is decent by human standards - https://github.com/adamkarvonen/chess_gpt_eval
SOTA models can't play it because these companies don't really care about it.
LinuxAmbulance|10 months ago
I wonder if any of the people that quit regret doing so.
Seems a lot like Chicken Little behavior - "Oh no, the sky is falling!"
How anyone with technical acumen thinks current AI models are conscious, let alone capable of writing new features and expanding their own abilities, is beyond me. Might as well be afraid of calculators revolting and taking over the world.