top | item 39367810

WhrRTheBaboons | 2 years ago

it makes perfect sense.

the fact that GPT correctly reproduces the initial paragraphs gives users the impression that they can rely on it for the rest of the article.

from the perspective of a journalistic publication, the fact that GPT can fool people into thinking they are reading NYT content when it is in fact LLM hallucination has a non-negligible negative impact on its integrity.

this is separate from the copyright issue, but considering that it can be illegal to misrepresent someone's work and words when it leads to reputational damage, i fail to see how GPT can be completely off the hook for this.

Obviously that depends on country-specific slander/libel laws, which admittedly are quite lax in the US, but in general I could see this leading to problems if left unaddressed by OpenAI.

As an example, if I were to make a website copying parts of NYT's articles and injecting fabrications into the rest, presenting the entire thing as a representation of NYT's work, a court would easily rule in NYT's favor.

Despite the default disclaimer that AI output can be inaccurate, the fact that GPT will accurately reproduce the first part of NYT's content is problematic, as there is no way for the user to know where or why the rest diverges.
