top | item 46753929

m0llusk | 1 month ago

Hallucinations that have certain characteristics and boundaries are still hallucinations. This happens because learning models do pattern matching, so, to put it briefly, anything that fits the pattern may end up in the output.

Being able to admit the flaws and limitations of a technology is often critical to advancing adoption. Unfortunately, producers of currently popular learning-model-based technologies are more interested in speculative growth than in genuinely robust operation. This paper is a symptom of a larger problem that is contributing to the bubble pop, downturn, or "AI winter" that we are collectively heading toward.

chrisjj | 1 month ago

That diagnosis is supported by the author blurb:

> The Lab’s goal is to ensure AI systems do not only produce fluent answers but also preserve the purpose, nuance, and integrity of language itself.
