top | item 41560577


minot | 1 year ago

> "the last 5% is an open research problem".

That is the biggest hurdle, in my opinion. If the model could even reply with "sorry, I don't know about that", it would be such an improvement over what we have today. Sadly, from what I understand, the only reliable way to get "sorry, I don't know about that" is to have it say that to every single question.


Filligree | 1 year ago

There's no specific reason why LLMs couldn't be trained to say "Don't know" when they don't know. Indeed, some close examination shows separate computation patterns when a model is telling the truth, when it's making an honest mistake, and when it's deliberately bullshitting, with the last being painfully common.

The problem is we don't train them that way. They're trained on what data is on the internet, and people... people really aren't good at saying "I don't know".

Applying RLHF on top of that at least helps reduce the deliberate lies, but it isn't normal to give a thumbs-up to an "I don't know" response either.

...

Of course, all this stuff does seem fixable.
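For what it's worth, one naive mitigation people sometimes sketch (outside of training) is thresholding the model's own token probabilities and abstaining when average confidence is low. Everything below is illustrative, not any real model's API: the function name, the toy logprob values, and the threshold are all assumptions, and in practice logprobs are often poorly calibrated against factual knowledge, which is essentially the objection raised in the replies.

```python
# Hypothetical abstention heuristic: if the mean per-token log-probability
# of a generated answer falls below a cutoff, replace the answer with an
# "I don't know" response. The logprob lists here are toy values standing
# in for what an LLM API would return alongside generated tokens.

ABSTAIN_THRESHOLD = -1.0  # assumed cutoff; would need tuning per model


def answer_or_abstain(answer: str, token_logprobs: list[float]) -> str:
    """Return the answer only if the model seemed confident in it."""
    mean_lp = sum(token_logprobs) / len(token_logprobs)
    if mean_lp < ABSTAIN_THRESHOLD:
        return "Sorry, I don't know about that."
    return answer


# Confident generation: tokens near probability ~0.9 (logprob ~ -0.1)
print(answer_or_abstain("Paris", [-0.1, -0.2, -0.05]))
# Unsure generation: tokens near probability ~0.2 (logprob ~ -1.6)
print(answer_or_abstain("Lyon?", [-1.6, -1.9, -1.4]))
```

The design flaw, of course, is that a model can be highly "confident" (low-entropy) while confabulating, so this only catches a narrow slice of failures.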

crazygringo | 1 year ago

> There's no specific reason why LLMs couldn't be trained to say "Don't know" when they don't know.

Yes there is: we don't know how. We don't have anywhere close to the level of understanding needed to tell when an LLM knows something and when it doesn't.

Training on material that includes "I don't know" will not work. That's not the solution.

If we knew how, we'd be doing it, since that's the #1 user complaint, and the company that fixed it would win.

mikepurvis | 1 year ago

Do you think it's really a training-set problem? I don't think you learn to say that you don't understand by observing other people say it; you learn it by being introspective about how much you have actually comprehended, recognizing when your thinking is going in multiple conflicting directions and you don't know which is correct, and so on.

Kids learn to express confusion and uncertainty in an environment where their parents are always very confident of everything.

Overall though, I agree that this is the biggest issue right now in the AI space; instead of being able to cut itself off, the system just rambles and hallucinates and makes stuff up out of whole cloth.