rzmmm | 12 days ago

It looks like typical "memorization" in image generation models. The author likely just prompted the image.

The model makers attempt to add guardrails to prevent this, but it's not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.
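For what it's worth, this kind of near-duplication is usually measured by comparing a generated image against the training set with a perceptual hash or embedding distance. Here's a toy sketch of that idea, using a tiny pure-Python average-hash on made-up 4x4 grayscale grids (all names and data below are illustrative, not from any real study):

```python
# Toy sketch: hash the "generated" image and compare it against hashes of
# "training" images. A small Hamming distance suggests near-duplication.
# Real memorization studies use perceptual hashes (pHash) or learned
# embeddings on real images; this is a minimal stand-in.

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is above the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical "training set" of 4x4 grayscale images.
training_images = {
    "photo_a": [[10, 200, 10, 200], [10, 200, 10, 200],
                [10, 200, 10, 200], [10, 200, 10, 200]],
    "photo_b": [[5, 5, 250, 250], [5, 5, 250, 250],
                [250, 250, 5, 5], [250, 250, 5, 5]],
}

# A "generated" image that is photo_a with slight pixel-level modifications.
generated = [[12, 198, 11, 201], [9, 202, 10, 199],
             [10, 200, 13, 197], [11, 203, 10, 200]]

gen_hash = average_hash(generated)
for name, img in training_images.items():
    print(name, hamming(gen_hash, average_hash(img)))
```

Here the slightly-perturbed copy hashes identically to its source while staying far from the unrelated image, which is the signal memorization audits look for.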

pjc50 | 11 days ago

Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.

coldpie | 11 days ago

> It seems a lot of large AI models basically just copy the training data and add slight modifications

Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing it so much: they can finally freely ignore copyright law by laundering it through an AI.

jimmaswell | 11 days ago

> It seems a lot of large AI models basically just copy the training data and add slight modifications

This happens even to human artists who aren't trying to plagiarize. For example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even though it felt original to them in the moment.