Would be curious to hear an elaboration on this perspective. In your opinion, on which measures of intelligence would GPT-4 fail to out-perform a human with an IQ of 80? Conversely, on which measures do you imagine it would succeed at doing so? Are the latter less significant or valid than the former?
chimprich|2 years ago
GPT-4 will produce stuff, but only if prodded to do so by a human.
I recently asked it to help me write some code for a Garmin smartwatch. The language used for this is MonkeyC, for which there aren't many examples on the internet.
It confidently provided me with code, but it was terrible. There were gaps filled only by comments describing what the code should do, bugs, calls to functions that didn't exist, and many other problems.
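To give a flavour of what I mean (this is a reconstruction, not the actual output; `getHeartRate` and `drawCenteredText` are made-up names standing in for the kind of thing it invented):

```monkeyc
using Toybox.WatchUi;
using Toybox.Graphics;

class MyWatchView extends WatchUi.View {
    function initialize() {
        View.initialize();
    }

    function onUpdate(dc) {
        dc.setColor(Graphics.COLOR_WHITE, Graphics.COLOR_BLACK);
        dc.clear();
        // TODO: get the current heart rate here  <-- a "gap" left as a comment
        var hr = getHeartRate();      // helper that's referenced but never defined
        dc.drawCenteredText(hr);      // plausible-sounding method that doesn't exist;
                                      // the real Dc API is drawText(x, y, font, text, justification)
    }
}
```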
I pointed out the issues and GPT-4 kept apologising and trying new things, but without any improvement. There wasn't any intelligence there; the model had just intuited what a program might look like from sparse data, and then kept doing the same thing. It didn't know what it was doing; it just took directions from me. When it couldn't map my request onto a concept in memory, it couldn't suggest ideas of its own.
A human with an IQ of 80 would know if they didn't know how to code in MonkeyC. If they thought they did, they'd soon adjust their behaviour when they realised they couldn't. They'd know where the limit of their knowledge was. They wouldn't keep trying to guess what functions were available. If they didn't have any examples in memory of what the functions might be like, they might come up with novel workarounds, or they'd appreciate what program I was trying to write and suggest a different approach.
Presumably we'll make progress on this at some point, but I think it'll take new breakthroughs, not just throwing more parameters at existing models.
verbify|2 years ago
"Here is a random string of 32 characters:
a8Jk5pYr0Dm9Nc1Vz8Qf2Bt6Hg3Lw4Uo"
ux-app|2 years ago
Does a 4-year-old have intelligence?