purple_basilisk | 2 years ago

Good point about hallucinations - low accuracy, high confidence. I wonder if AI will develop the ability to nuance its own confidence. It would be a more useful tool if it could provide a reasonable confidence level along with its output. Much like a human would say, "not sure about this, but..."


unknownsky | 2 years ago

I'm not an AI expert so I could be wrong, but it's my understanding that there is a confidence score behind the scenes. It's just not shown in the current UI.

An automated AI system should be able to ask a human for help whenever the confidence score is below a certain threshold or even spit out a backlog of all the tasks it can't confidently handle itself.
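A minimal sketch of that routing idea, assuming the system exposes per-token log-probabilities (the threshold and numbers here are illustrative, not from any real API):

```python
# Hypothetical sketch: treat the mean per-token probability as a
# confidence score, and defer low-confidence tasks to a human backlog.
import math

THRESHOLD = 0.80  # illustrative cutoff, would need tuning in practice

def confidence(token_logprobs):
    """Average per-token probability recovered from log-probabilities."""
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)

def route(task, token_logprobs, backlog):
    """Handle the task if confident, otherwise append it to the backlog."""
    if confidence(token_logprobs) < THRESHOLD:
        backlog.append(task)  # ask a human instead
        return "deferred"
    return "handled"

backlog = []
route("summarize report", [-0.05, -0.02, -0.10], backlog)  # confident
route("novel legal question", [-1.2, -0.9, -1.5], backlog)  # not confident
```

The backlog then doubles as the "list of tasks it can't confidently handle" mentioned above.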

euroderf | 2 years ago

FWIW, Watson used its internal confidence score when playing Jeopardy.

worrycue | 2 years ago

It needs to be able to evaluate its own output. We humans do a quick sanity check most of the time before we speak - "On what do I base this assertion?" ... etc.

Robotbeat | 2 years ago

I wonder if multiple, independently trained LLMs could be used in a voting system to determine confidence, or simply to call out each other's bulls**.
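A rough sketch of that voting scheme, where agreement rate among independent models serves as the confidence signal (the answers below are placeholders, not real model outputs):

```python
# Majority vote across answers from independently trained models;
# the fraction that agree acts as a crude confidence score.
from collections import Counter

def vote(answers):
    """Return (winning answer, agreement fraction) over model outputs."""
    counts = Counter(answers)
    answer, n = counts.most_common(1)[0]
    return answer, n / len(answers)

# Two of three hypothetical models agree -> agreement = 2/3
answer, agreement = vote(["Paris", "Paris", "Lyon"])
```

Low agreement would then flag the question for the dissenting-model or human review discussed upthread.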

ChatGTP | 2 years ago

Two wrong systems won't make a right though. Especially when the wrong systems are getting more convincing at being right.