When I look at how far tech has come in my own lifetime (I'm in my mid-50s), I don't think the singularity is out of the question within my kids' lifetime, or even my own if I'm lucky. When I was born there was no such thing as a PC or the internet.
jay_kyburz|7 months ago
As far as I'm aware, the only missing step is for LLMs to be able to roll the results of a test back into their training set. A model could then start proposing hypotheses and testing them. Then it could do logic.
I don't understand the skepticism. LLMs are already a lot smarter than me; all they need is the ability to learn.
** Wikipedia definition of singularity: "an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence that culminates in a powerful superintelligence, far surpassing all human intelligence.[4]"
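(The propose-test-retrain loop this comment describes can be sketched in a few lines. Everything below is a hypothetical stand-in, not any real training API: the function names are invented, and the "test" is just a coin flip.)

```python
import random

def propose_hypothesis(training_data):
    # Stand-in for the model generating a conjecture from what it has seen so far.
    return f"hypothesis-{len(training_data)}"

def run_test(hypothesis):
    # Stand-in for running an experiment; here, a coin flip.
    return random.random() < 0.5

def self_improvement_loop(steps=5):
    training_data = []
    for _ in range(steps):
        h = propose_hypothesis(training_data)
        result = run_test(h)
        # The "missing step" the comment names: roll the test result
        # back into the training set before the next iteration.
        training_data.append((h, result))
    return training_data

print(len(self_improvement_loop()))  # -> 5
```

Whether closing this loop actually yields the feedback cycle in the Wikipedia definition is exactly what the replies below dispute.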
johnnyanmac|7 months ago
That's highly doubtful, unless your definition of intelligence rests on the volume of regurgitated information rather than on contextualizing and building on that knowledge. LLMs are "smart" in the same way a person who gets a 1600 on the SAT* is "smart". If you spend your time min-maxing toward a specific task, you get very good at it. That skill can even get you as far in life as being a subject matter expert. But that's not what makes humans "intelligent" in my eyes.
*Yes, there is a correlation, because people who take the time to study and memorize for a test tend to have better work habits than those who don't. But there's a reason some of those students end up completely lost in college despite their diligence.
>I don't understand the skepticism.
To be frank, we're in a time when grifts are running wild and grifters are getting away with it, inside and outside of tech. In 2025 I am very skeptical by default of anyone who talks in terms of "what could happen" rather than what is actually practical or possible.
TheOtherHobbes|7 months ago
Until now, computing has run on a completely different model of implied reliability: the base hardware is supposed to be as reliable as possible, and software is supposed to mostly work. Bugs are tolerated because they're hard to fix; no one suggests they're a good thing.
LLMs are more like something that looks like a text-only web browser, except you have no idea whether it's producing genius or gibberish. "Just ignore the mistakes, if you can be bothered to check whether they're there" is quite the marketing pitch.
The biggest development in tech has been the change in culture - from the utopian libertarian "give everyone a bicycle for the mind and watch the joy" to the corporate cynicism of "collect as much personal information as you can get away with, and use it to modify behaviour, beliefs, and especially spending and voting, to maximise corporate profits and extreme wealth."
While the technology has developed, the values have run headlong in the opposite direction.
It's questionable whether a culture with these values is even capable of creating a singularity without destroying itself first.
sjsdaiuasgdia|7 months ago
You are almost certainly underestimating yourself and overestimating LLMs.