TimPC|7 months ago
People should be worried because right now AI is on an exponential growth trajectory, and no one knows when it will level off into an s-curve. AI is starting to get close to good enough. If it becomes twice as good in seven months, then what?
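The worry here is that an exponential and a logistic (s-curve) are nearly indistinguishable early on and only diverge as the ceiling approaches. A minimal sketch of that point; the growth rate and ceiling below are arbitrary illustrative values, not a model of AI progress:

```python
import math

def exponential(t, rate=1.0):
    # Pure exponential: keeps compounding forever.
    return math.exp(rate * t)

def logistic(t, rate=1.0, ceiling=1000.0):
    # Logistic (s-curve): tracks the exponential early on,
    # then levels off as it approaches the ceiling.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

# Early on the two curves agree; later they diverge wildly.
for t in [0, 2, 4, 8, 12]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

The point of the sketch: if you only observe the early part of the curve, you cannot tell which regime you are in, which is exactly why "no one knows when it will level off."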
roadside_picnic|7 months ago
Even in nature this is clear. Humans are a great example: cooked food predates Homo sapiens, and it is widely considered a prerequisite for human-level intelligence because of the enormous energy demands of our brains. And nature has given us wildly more efficient brains in almost every possible way: the human brain runs on about 20 watts of power, while my RTX draws 450 watts at full capacity.
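The size of that efficiency gap is worth making explicit. A back-of-the-envelope calculation using only the two figures quoted above (20 W for the brain, 450 W for the GPU):

```python
# Rough power gap between the figures quoted above.
brain_watts = 20    # approximate power draw of a human brain
gpu_watts = 450     # a high-end consumer RTX card at full load

ratio = gpu_watts / brain_watts
print(f"The GPU draws roughly {ratio:.1f}x the power of a brain.")
```

That is a ~22x gap on raw power draw alone, before accounting for what each system actually accomplishes per watt.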
The idea of "runaway" superintelligence bakes in some very extreme assumptions about the nature of thermodynamics and intelligence that are largely just hand-waved away.
On top of that, AI hasn't changed in a notable way for me personally in a year. The difference between 2022 and 2023 was wild; 2023 to 2024 changed some of my workflows; 2024 to today is largely just more options around which tooling I use and how these tools can be combined, but nothing feels fundamentally improved.
LeftHandPath|7 months ago
More recently, it seems like that's not the case. Larger models sometimes even hallucinate more [0]. I think the entire sector is suffering from a Dunning-Kruger effect: making an LLM is difficult, and they managed to get something incredible working in a much shorter timeframe than anyone really expected back in the early 2010s. But that led to overconfidence and hype, and I think there will be a much longer tail of future improvements than the industry would like to admit.
Even the more advanced reasoning models will struggle to play a valid game of chess, much less win one, despite having plenty of chess games in their training data [1]. I think that, combined with the trouble of hallucinations, hints at where the limitations of the technology really are.
Hopefully LLMs will scare society into planning how to handle mass automation of thinking and logic, before a more powerful technology that can really do it arrives.
[0]: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
[1]: https://dev.to/maximsaplin/can-llms-play-chess-ive-tested-13...
esafak|7 months ago
I believe hallucinations are partly an artifact of imperfect model training, and thus can be ameliorated with better technique.
nwienert|7 months ago
GPT-1 June 2018
GPT-2 February 2019
GPT-3 May 2020
GPT-4 March 2023
Claude tells me this is the rough improvement of each:
GPT-1 to 2: 5-10x
GPT-2 to 3: 10-20x
GPT-3 to 4: 2-4x
Now it's been 2.5 years since 4.
Are you expecting 5 to be 2-4x better, or 10-20x better?