item 33720215

bumholio | 3 years ago

Expecting Galactica to produce truthful academic papers is about as sensible as expecting to find a significant other among your kitchen appliances. Language models are a way to emulate the writing style of human-produced content; only a fool would expect them to reason about the text to anything resembling a scientific standard.

Written language is a doorway to the full extent of human cognition; unless the problem domain is severely constrained (e.g. "What is the distance to Mars?"), you are very likely to fall into reflexive traps that rapidly devolve into AGI territory ("I think, therefore I am?").


themoonisachees | 3 years ago

The issue isn't that the model isn't truthful; it's that it is effective at writing language that appears factual and looks truthful to the untrained eye. Sure, it is going to give you what you're asking for, but the problem comes when you take that output and hand it, without any warning about its origins, to people who can't be expected to fact-check a scientific article.

hack-ernews | 3 years ago

You don't need AI to fool people who can't understand a scientific article yet will trust its conclusions.