muskmusk|2 years ago

Friend, the person behind this new advance is a machine learning PhD with a decade of experience pushing machine learning forward. He knows a lot of math too. Maybe there is a chance that he, too, can tell the difference between a meaningless advance and an important one?

seanhunter|2 years ago

That is as pure an example of the fallacy of argument from authority[1] as I have ever seen, especially when you consider that any nuance in the supposed letter from the researchers to the board will have been lost in the translation from "sources" to the journalist to the article.

[1] https://en.wikipedia.org/wiki/Argument_from_authority

abhpro|2 years ago

That fallacy's existence alone doesn't discount anything (nor have you shown it applies here); otherwise we'd throw out the entire idea of authorities, and we'd be in trouble.

Eisenstein|2 years ago

When the person arguing invokes their own authority (job, education) to give their answer relevance, it is valid to point out that another person's authority (job, education) is greater and gives that person's answer preeminence.

neilk|2 years ago

I am neither a mathematician nor an LLM creator, but I do know how to evaluate interesting tech claims.

The absolute best-case scenario for a new technology is when it seems like a toy for nerds and doesn't outperform anything we have today, but the scaling path is clear.

Its problems just won't matter if it does that one thing with scaling. The web is a pretty good hypermedia platform, but a disastrously bad platform for most other computer applications. Nevertheless, the scaling of URIs and internet protocols has caused us to reorganize our lives around it. And if there really are unsolvable problems with the platform, they just get offloaded onto users. Passwords? Privacy? Your problem now. Surely you know to use a password manager?

I think this new wave of AI is going to be like that. If they never solve the hallucination/confabulation issue, it's just going to become your problem. If they never really gain insight, it's going to become your problem to instruct them carefully. Your peers will chide you for not using a robust AI-guardrail thing or not learning the basics of prompt engineering like all the kids do instinctively these days.

wbhart|2 years ago

How on earth could you evaluate the scaling path with so little information? That's my point. You can't possibly know that a technology can solve a given kind of problem if, so far, it can only solve a completely different kind of problem that is largely unrelated!

Saying that performance on grade-school problems is predictive of performance on complex reasoning tasks (including theorem proving) is like saying that a new kind of mechanical engine that has 90% efficiency can be scaled 10x.

These kinds of scaling claims drive investment, I get it. But to someone who understands (and is actually working on) the actual problem that needs solving, this kind of claim is perfectly transparent! A toy numerical sketch of the trap follows.
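
To make the trap concrete, here is a minimal sketch with made-up numbers (the benchmark figures and the log-linear fit are purely illustrative, not from the article):

    # A minimal sketch of the extrapolation trap; all numbers are
    # made up and purely illustrative.
    import numpy as np

    # Hypothetical grade-school-math accuracy at three model scales.
    params = np.array([1e9, 1e10, 1e11])     # parameter counts
    accuracy = np.array([0.40, 0.65, 0.85])  # benchmark accuracy

    # Fit accuracy ~ a * log10(params) + b to the three points.
    a, b = np.polyfit(np.log10(params), accuracy, 1)

    # A naive 10x extrapolation predicts accuracy above 1.0, which is
    # already impossible -- and even a plausible-looking number would
    # only be evidence about *this* task, not about theorem proving.
    print(a * np.log10(1e12) + b)  # ~1.08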

raincole|2 years ago

But he also has an incentive to exaggerate the AI's ability.

The whole idea of the double-blind test (and really, of scientific methodology as a whole) is based on one simple thing: even the most experienced and informed professionals can be comfortably wrong.

We'll only know when we see it. Or at least when several independent research groups see it.

visarga|2 years ago

> even the most experienced and informed professionals can be comfortably wrong

That's the human hallucination problem. In science it's a very difficult issue to deal with; only in hindsight can you tell which papers from a given period were the good ones. It takes a whole scientific community to arrive at the truth, and sometimes we fail.

lokar|2 years ago

I thought (and could be wrong) that all of these concerns are based on a very low probability of a very bad outcome.

So: we might be close to a breakthrough, that breakthrough could get out of hand, and then it could kill a billion+ people.

aidaman|2 years ago

Unlikely. We'll know when OpenAI declares itself ruler of the new world, imposes martial law, and takes over.

nobrains|2 years ago

Also, wbhart is referring to publicly released LLMs, while the OpenAI researchers are most likely referring to an unreleased, still-in-research LLM.

las_balas_tres|2 years ago

Sure... but that machine learning PhD has a vested interest in being optimistically biased in his observations.

smrtinsert|2 years ago

Ah, finally, the engineer's approach to the news. I'm not sure why we have to have hot takes instead of dissecting the news and trying to tease out the how.