wombat_trouble|3 years ago
The point is that basically all Stable Diffusion / DALL-E / MidJourney output is some shade of this; the only new data is that contrary to prior assertions, in some cases, it goes all the way to a verbatim copy.
I think there are some defensible stances one can take. One is to reject the idea of intellectual property. Another is to advocate for some specific legal or technical bar that the models would have to pass for it to qualify as "not stealing". Yet another is to argue it's a morally-agnostic technology like VHS or a photocopier, and the burden of using it in a socially acceptable way rests with the user.
FeepingCreature|3 years ago
What, summarize the submission? This is straight quoted from the link.
> The point is that basically all Stable Diffusion / DALL-E / MidJourney output is some shade of this
Yes, and that point is mistaken, or so generic as to be worthless. The network memorizes art for the same reason humans memorize art: because there are some art pieces we see so often that we can recall them easily.
Ask an artist to duplicate Starry Night or Scream from memory, and you'll probably get at least a passable imitation. The more capable the artist, the more faithful the imitation will be.
We know that SD can be made to plagiarize, given repeated training on a specific image. (This is just to say that a neural network can learn to regurgitate a sample, a capability that was never in question.) This is a far cry from the assertion that its art is generally plagiarized.
egypturnash|3 years ago
A human artist has been trained in the ethics and laws of their craft along with the skills required to make images.
A human artist, asked to clone Starry Night, will ask you what you are doing, and knows where the lines are between "a tribute", "plagiarism", and "outright forgery".
A human artist, asked to do work in the style of another artist, will have a certain respect for the other artist's ownership of their style. This is not a thing that is at all protected by intellectual property law but it is still a thing artists are trained to respect. There are exceptions - drawing just like your boss may be your job, drawing just like a living artist for a couple pieces is a useful way to break down their style and take a few parts of it to influence later work without going over the "style swipe" line, building your own work on the obvious foundation of an influential, dead artist's style is fine - but there are lines professionals will be very reluctant to cross.
For a relatively recent example of what happens if you break these unwritten laws, check out what happened when the American cartoonist Keith Giffen started doing a wholesale style swipe from the Argentinean cartoonist Muñoz: https://en.wikipedia.org/wiki/Keith_Giffen#Controversy
Neural networks know none of these unwritten rules. Neither do the people who are training them. Feed it a bunch of work created by a living artist and start making a profit off of that? Sure, no problem! Bonus points if your response to them getting pissed off about this is to call them a luddite who is resisting the inevitable, and should throw away a lifetime of passionate training and go get a new job.
Permit|3 years ago
This is absolutely not the point of the linked paper. It may be something you believe, but you're on the hook for providing evidence for it; this paper does not.
vanderZwan|3 years ago
Or, you know, legislation. I'm kind of sick of everything being offloaded as a responsibility of the end-user as an excuse to externalize costs.
Plus, in this case it's not even like VHS or a photocopier, it's more like the printing press or the Jacquard loom: those with capital to invest in it benefit the most, at the expense of individuals being exploited.
williamcotton|3 years ago
Tools that are a burden to use, such as tools that produce too many infringing works, are not going to sell as well as tools that are not.
This means that if someone makes a tool like this that also alerts the user of likely infringement it would perform better in a corporate, risk-averse marketplace.
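As a rough illustration, such an alert could be built on perceptual hashing: hash the generated output, compare it against hashes of known works, and flag outputs within a small Hamming distance. This is a minimal sketch assuming toy images represented as 2D lists of grayscale values; the function names and the distance threshold are illustrative, and a real tool would decode and resize actual image files.

```python
# Hypothetical sketch of an infringement alert via "average hash":
# downsample an image, then record which pixels are brighter than the mean.
# Near-duplicates share almost all hash bits; unrelated images do not.

def average_hash(pixels):
    """Return a bit list: 1 where a pixel exceeds the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count the positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def likely_infringing(generated, known_work, max_distance=2):
    """Flag outputs whose hash is within max_distance bits of a known work."""
    return hamming(average_hash(generated), average_hash(known_work)) <= max_distance

# Toy 2x2 "images": a near-verbatim copy versus an unrelated composition.
original = [[10, 200], [220, 30]]
near_copy = [[12, 198], [221, 28]]
unrelated = [[200, 10], [30, 220]]
```

In practice the tool would run this comparison against an index of hashes from the training corpus and surface the closest matches to the user, which is exactly the kind of risk signal a corporate buyer would want.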
williamcotton|3 years ago
There is a lot of case law that supports this interpretation, Sony v Universal being the most important as it introduced the notion of “commercially significant non-infringing uses”, of which there will be many by the time this hits trial and the appeals process.
Lawyers for these companies know this, and the faster they can get people building and buying tools that are clearly non-infringing, the more likely it is that these models will be seen as fair use.
However, if they want to keep the lawyers at their customers' firms happy, they will really need to come up with a way to show users whether a work is likely to infringe on an existing work.
bioemerl|3 years ago
Generating copyrighted images isn't a problem. Using them to make money is.
yencabulator|3 years ago