No. I can outperform any state-of-the-art model as long as I'm given the same liberty researchers have, to choose which categories I'm judged in and how I'm judged. If the situation is particularly dire, I can also do a little dance to convince a human judge that I am sentient - GPT-4 cannot, to my knowledge.
GPT-3.5 certainly was not. Even though it knew a great many facts, it was like a 12-year-old child with a search engine.
GPT-4 feels like an adult of average intelligence, again with a search engine. But it’s fast and it never gets tired or cranky.
I suspect the next iteration of these models will be obviously and conclusively smarter than I am, and probably most other people as well.
Since the training data is human, it stands to reason that the maximum intelligence achievable by this approach is no more than, say, that of the most intelligent 1% or 0.1% of humans. And there would need to be a large enough population of very smart folks to create a large training corpus.
I'm not sure that limit is meaningful. Suppose that there's a system with the approximate reasoning ability of a person, but it doesn't forget things and it has studied every textbook ever written. Would you say that it's only as intelligent as a person?
Smarter, sure... it can process massive amounts of information considerably faster than I can. Intelligent, I don't think so... I can pull together a new and fresh idea by observing the shadow of a bird flying across the night sky; I can be inspired to rally a team by the beauty of the patterns of rain on my car window while driving to work. LLMs are smarter than me, but I still think I'm more intelligent than them. Not really what you asked, just a thought. :)