top | item 34295475


andreshb | 3 years ago

tl;dr by GPT-3

Paper contributes to the debate about the abilities of large language models like GPT-3

Evaluates how well GPT performs on the Turing Test

Examines limits of such models, including tendency to generate falsehoods

Considers social consequences of problems with truth-telling in these models

Proposes formalization of "reversible questions" as a probabilistic measure

Argues against claims that GPT-3 lacks semantic ability

Offers theory on limits of large language models based on compression, priming, distributional semantics, and semantic webs

Suggests that GPT and similar models prioritize plausibility over truth in order to maximize their objective function

Warns that widespread adoption of language generators as writing tools could result in permanent pollution of the informational ecosystem with plausible but untrue texts
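The objective-function point can be illustrated with a toy sketch (my own illustration, not from the paper): a maximum-likelihood language model assigns probability by frequency alone, so the "best" continuation under its training objective is the most plausible one, with no mechanism for checking truth. A minimal bigram model over a made-up corpus shows the effect:

```python
from collections import Counter

# Toy corpus (invented for illustration): the training objective rewards
# whatever continuations are frequent, with no notion of truth.
corpus = (
    "the sky is blue . "
    "the sky is blue . "
    "the sky is blue . "
    "the grass is green . "
    "mars is red . "
).split()

# Maximum-likelihood bigram estimates: P(next | prev) from counts alone.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def prob(prev, nxt):
    """Frequency-based probability that `nxt` follows `prev`."""
    return bigrams[(prev, nxt)] / unigrams[prev]

# "blue" wins after "is" purely because it is the most frequent
# continuation in the corpus -- plausibility, not truth-checking.
print(prob("is", "blue"))
print(prob("is", "red"))
```

Scaled up, the same dynamic applies: a model trained only to maximize next-token likelihood will emit whatever its training distribution makes most plausible, which is the mechanism behind the "plausible but untrue texts" warning above.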


No comments yet.