I'm not a fan of the glib "everyone knows AI systems don't really think, they're just stochastic parrots, all they do is regurgitate ideas they've stolen" shtick, but this article is the reverse of that, only worse.
Today's AI systems are pretty impressive but they are absolutely not, not even slightly, the equivalent of Einstein + Hawking + Tao. The reason they get used a lot for tasks along the lines of "rewrite this so it sounds smarter" is that that's what they're best at.
If we did as the author seems to want and tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings and Taos for, then we would be in for one miserable disappointment after another. Maybe some day -- maybe some day very soon -- they'll be able to do that, but not now.
An article proclaiming that today's AI systems are at the level of Einstein mostly suggests to me that the author's own intellectual level isn't much higher than that of the AI systems he falsely equates with them. That seems unlikely, but I don't have a better explanation for how someone could write something so very far from the truth.
> If we […] tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings and Taos for, then we would be in for one miserable disappointment after another
We can literally watch Terence Tao himself vibe coding formal proofs using Claude and o4. He doesn't seem too disappointed.
https://youtu.be/zZr54G7ec7A?si=GpRZK5W1LDvWyBBw
> The reason they get used a lot for tasks along the lines of "rewrite this so it sounds smarter" is that that's what they're best at.
I disagree. The reason is that that's what aligns best with what most people are looking for help with.
There is a disconnect between reality and the AI consumer envisioned here. There is no magical, enlightened user who's going to unleash their inner potential.
How much physics or math does the average person know? How much do you think they even WANT to know? The answer is surprisingly little.
On a day-to-day basis the layman writes emails and handles other mundane tasks, and wants to do them faster and more easily.
Having a squad of geniuses in my pocket doesn't pay my bills.
I agree 100%. Additionally, this article ignores the existence of Google. Even the high-level questions the person asked Einstein, before devolving to requests for email-editing help, were things you could have just googled.
The greatness of great minds was how they thought about problems and how they changed how we thought about things. An AI cannot do that. It's designed to tell you what people collectively have already agreed upon. It's not designed to break the frontier of our knowledge.
There's a little nagging thought in my head when I hear that some people are helped immensely by AI and others are not: maybe there's an intelligence threshold below which the AI impresses you and above which it does not. I'm sure this threshold will continue to rise.
This blog post started off sounding like it was about the plight of highly intellectual and motivated engineers hired to work on very mundane tasks. If we can abuse people like this, why not a computer? After all, it's not even alive.
We don't know why we experience things. It's bizarre that we do. Nothing in our understanding of the universe gives any indication that a bunch of atoms thrown together by cosmological processes and then assembled into self-replicating patterns by evolution should be able to experience what is happening to them.
Sure, a computer or an LLM isn't alive, but we have no idea if "being alive" is what is required for conscious experience.
The only argument I have for believing that other human beings experience things is that it would be extremely improbable if I was the only one, and the other mechanistic automatons looked and talked like me but didn't experience like me. I can see that humans are animals, so the common origin of animals and our cognitive and behavioral similarities give us good reason to believe that other complex animals experience things, though possibly radically differently.
None of that gives us any clue what the necessary and sufficient conditions for conscious experience are, so it doesn't give us any clue whether a computer or a running LLM instance would experience its existence.
My read was that he's sad that people aren't using these tools to advance their own intellectual capabilities. If people are actually only using them the way he describes, to improve their shopping lists etc., I think that is a bit sad.
You're missing the part where, despite your rent going from $0 to $20 to $200, housing the three of them actually costs $2000, and the providers keep operating at a loss in the hope they can boil the frog until you turn them a profit.
On r/AskPhysics you'll see people post AI-made crank theories every day. I assume there have been even more, as the mods constantly remove AI posts. So why would I let AI teach me physics?
AI is best at things you already know, or at least used to know. Like when you know a foreign dish but forget its exact name, or have an idiom on the tip of your tongue.
Nothing wrong with the situation. At some point in history, humans no longer needed to spend all their time finding food, raising kids, and taking care of family and community. So they got into the services business, selling services to each other. One kid polishes a fine pebble and trades it to another kid for a nicely carved piece of wood. Their elders see no value in any of this and shout at them to go hunt for more food. But the services thrived, outpacing humans' real needs. Technologies and tools evolved, claiming magical abilities. Sane humans only care about their basic needs, so they just use the magical tech for those basic needs, which makes perfect sense.
There are no "digital gods", only the super-powered autocorrect people call "AI". They can't make new stuff. They can't solve novel problems no human has solved before, though they _can_, with the right setup, brute-force solutions to well-understood problems by throwing everything at them until something sticks.
They don't learn. They don't teach. They are not the deities presented here. This article is fantasy, projected from real circumstances by an overactive imagination.
I'm curious to know what the author suggests we do. Elect Claude, ChatGPT, and Gemini as our leaders? Put these future cancer cure discoverers in a hospital and get them to work curing individual cancer patients?
I would suggest paying the $20/month rent, and trying to use ChatGPT o3/o4-mini-high/o1-pro as a tutor, to help you understand something you're curious about but never really had time or energy to dig into before. It's pretty glorious, and a straight-up pedagogical revolution, IMO.
> This is three geniuses for the price of a gym membership
Geniuses? Come on. Let's talk when an LLM is central to a new development in HEP or math. I mean central, as in a paradigm shift, directly from the AI: a quantum gravity theory, a brand-new branch of math, a new approach to an unsolved conjecture, whatever. That's what geniuses do. Not repeating what you can already read in a book! This kind of thing says more about people's ignorance and how impressionable they are than about the actual capabilities of the tech. If you think that AI text and image generation _creativity_ can be translated to hard things like math, oh boy.
jxjnskkzxxhx|9 months ago
Oh, is that what the point of the article was? That's so stupid it didn't even cross my mind.
pedrocr|9 months ago
Doesn't have quite the same ring as a lament.