item 45145388

greensoap | 5 months ago

According to the lawsuit, Anthropic did exactly this to train its models. The lawsuit even found that Anthropic didn't actually use the pirated books to train its model. So there is that.

hcs | 5 months ago

The lawsuit didn't find anything, Anthropic claimed this as part of the settlement. Companies settle without admission of wrongdoing all the time, to the extent that it can be bargained for.

ijk | 5 months ago

The judge's ruling from earlier certainly seemed to me to suggest that the training was fair use.

Obviously, that's not part of the current settlement. I'm no expert on this, so I don't know the extent to which the earlier ruling applies.

freejazz | 5 months ago

They stated it in court in their papers for summary judgment on the issue of fair use. My gosh! To pretend like you know what you're talking about while missing that detail?

phillipcarter | 5 months ago

I'm "team Anthropic" if we're stack ranking the major American labs pumping out SOTA models by ethics or whatever, but there is no universe in which a company like them operating in this competitive environment didn't pirate the books.

Finbel | 5 months ago

"ethics or whatever" seem like a good tagline for people rooting for an AI-company when it's being sued by authors.

IshKebab | 5 months ago

Except for Google at least.