Humans learn to produce good art largely by learning from existing art: copying, mimicking, or just generally taking inspiration. We're going to need to get over this hang-up. Status quo bias is dumb, especially among people who are otherwise fond of piracy or anti-IP legislation.
Sounds to me like you've decided that AI veganism is not for you!
I've thought a bit about this comparison to human artists taking inspiration from each other. The problem is scale. A human artist can't look at 5bn images and then instantly mimic the style of another artist, hundreds of times an hour.
Consider facial recognition technology. At an individual level, empowering people to quickly see all of the photos they have taken of their spouse through running facial recognition against their photos is useful and harmless. Allowing governments to run the exact same technology against every photo uploaded to Facebook is a massively harmful expansion of the surveillance state.
I don't think training models on 5bn unlicensed images is on the same scale of harm as running facial recognition against an entire country's worth of people. But this example does show that things that are fine at a small scale can be harmful at a big scale.
This blindly steamrolls over some important distinctions. Humans aren’t allowed to sell direct copies of other people’s art (not in the US and many other countries, anyway). This isn’t a hang-up, and it’s not status quo bias; it’s well-covered law and economic philosophy that is already blending new digital-rights ideas into hard-fought legal precedents. We have explicitly decided on the social goal of protecting the rights of artistic, creative people & businesses so their work isn’t instantly ripped off.
Humans are also good at taking inspiration from ideas, where today’s AI is borrowing pixels. The AI is copying in a way that humans don’t; it’s not mimicking and taking inspiration. That is anthropomorphising software that is trained and programmed to make automated mashups of images.
So in part it depends on what we do with AI images. There may be nothing wrong with training on copyrighted material if the resulting inferences are never distributed publicly nor used for commercial purposes. Of course that seems extremely unlikely, which is why it needs to be discussed carefully and debated in good faith, right?
Maybe in part it also depends on whether the AI software is guaranteed to produce something very different from any individual training image. If the outputs are guaranteed to be always a mashup, and never indistinguishable from any single input, that seems like it would be an easier pill to swallow. (There appears to be legal precedent along these lines for music sampling.)
1) Humans learn from existing art, but they mainly draw on their experience and perception of the physical world when creating new art. AI doesn't have access to the real world, so its art is 100% based on existing work, not just inspired by it.
2) We still don't know exactly how much the models memorise. I'm sure you'll agree that something like Google image search retrieval doesn't qualify as original art, and copyright is still an issue. If you photoshop two images together, you probably also still have to give credit to the original images. We have to draw the line somewhere on the scale from "100% derivative" to "100% original". It's not yet clear where AI image generation falls on this.
> Humans learn to produce good art largely by learning from existing art
I think you’re wrong on this. Humans learn (anything) from teachers. Yes, you can learn much on your own, but the idea of a solo artist learning from books is exceedingly rare.
So in this case, you learn art by doing, critiquing your work and the works of others. You put forth effort.
While I am amazed at the generated images and think they truly are amazing, I can’t help but think they all feel a little cheap. Like someone took a shortcut that was never meant to be found.
I do think there are real ethical issues behind the training data for both image and code generation. Nothing that can’t be solved, but random images scraped from the web are not meant for training. First, it’s not necessarily fair use. And second: garbage in, garbage out. I don’t want my auto-generated images to come with a Getty watermark.
Where I do think there is hope is for the use of these as tools for artists. Where there can still be a human behind the choices and curation, but using the algorithms as a means rather than an end.
But that is just art. Not great art.
Great art comes from stepping away from your peer group after you have mastered it, being able to incorporate "unrelated" or "impossible" other concepts into the art. It's a subconscious process of recombination and filtering.
And only some can boldly go where no person has gone before.
Which makes this the ultimate training goal for AI. Not AGI, but a synthesis AI, capable of producing "breakthrough" candidates for the field it is trained upon, by allowing noise and filtering for the criteria of great breakthroughs: explanatory power, beauty, higher consistency, that moment when the puzzle piece fits all the gaps. If there is ever a creature out there doing that, silicon or otherwise, humanity will owe its continued existence to its existence.
To a certain extent I agree, but another argument would be that at the end of the day, computers are deterministic (even on the incredibly large scale of these image generators, given parameters x, it will produce y).
We're still not sure whether humans are deterministic or not. So you can't really equate the human process of art creation to a computer's. Humans may still be pulling from some external inspiration that will never be within the reach of computers (I like to think that's the case).
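The determinism point is easy to demonstrate in miniature. A hedged toy sketch (the `generate` function here is a made-up stand-in, not any real diffusion model): with the same parameters, the output is bit-for-bit identical every time.

```python
import random

def generate(prompt: str, seed: int, n: int = 8) -> list:
    # Toy stand-in for an image generator: the "image" is just n bytes
    # drawn from a PRNG seeded deterministically by the parameters.
    rng = random.Random(f"{prompt}|{seed}")
    return [rng.randrange(256) for _ in range(n)]

a = generate("a dog smoking a cigarette", seed=42)
b = generate("a dog smoking a cigarette", seed=42)

# Same parameters x -> same output y, every run, on every machine.
print(a == b)  # True
```

Real image generators add GPU-level nondeterminism in practice, but with a fixed seed and fixed hardware the same principle holds.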
Yes... but a human can understand what I want and anticipate various outcomes from much shorter conversations that don't involve all this ridiculous prompting. Asking this system for a simple picture like "a dog smoking a cigarette" reveals just how limited this system is.
Thinking that this system, with such a small set of data and no natural language processing, is going to replace artists anytime soon is, I think, incredibly eager to the point of foolishness.
That's fine, until an AI regurgitates a unique page of code I wrote without modification. That's just copying. Although, if the courts want to clarify that straight-up copying a page or two of code is okay, I would be happy.
My fear is copilot will be allowed to copy code, but I won't be.
I wonder what would happen if an artist got their stuff legally removed from the sources, and then you asked the AI to produce something in the style of their main influences.
Sure but the output from the AI does not give credit. So I don't know what the inspiration is or where it came from or how to find more of the same kind.
Is this in the same vein as professional Go and chess players having an existential crisis over AlphaGo/AlphaZero/MuZero attaining superhuman playing ability with zero supervised learning in something like < 72 hours on non-supercomputer hardware?
I suppose we'd all feel uneasy when AI eventually becomes better than humans at creative tasks which pay our bills or differentiate us within a profession.
> I know many vegans. They have access to the same information as I do about the treatment of animals, and they have made informed decisions about their lifestyle, which I fully respect.
> I myself remain a meat-eater.
It strikes me as off that one would consider themselves informed enough to have decided not to be a vegan but consider image generation AI unethical. As a former carnivore (literally sometimes going an entire day eating mostly or entirely meat), the only thing that had stopped me from being persuaded was my unwillingness to open my eyes to the information about just how horrific factory farming is.
That's my point. Even though I understand how unethical it is to eat meat, I continue to choose to eat it. I am not proud to remain a meat eater!
I only eat meat once or twice a week, and I try to consider the sources, but despite understanding the moral implications of doing so I have not gone vegan or vegetarian.
To my mind, this is similar to a situation in which I determine that using AI trained on unlicensed images is unethical but continue to choose to use those AIs.
Yeah, I found taking issue with copyright as the primary reason for being averse to AI almost breathtakingly hilarious (I at least exhaled through my nostrils once I read it). This quibble seems to me like the most anemic criticism one could muster around the ethical considerations implied by this new generation of AI.
Something as fundamental as eating -- a process you do multiple times a day to keep yourself alive, where compromising is an incredibly large convenience/monetary/lifestyle tradeoff -- and using AI models to generate images from text prompts are very different things.
I don't think this is universally unethical, and even if one does find this unethical, it seems low on the list of unethical things to be worrying about. Even in the context of exploiting people's labor.
Furthermore, I think a lot of art generated by actual intelligence is made by those consuming tons of copyrighted material and putting a twist on things. Is it unethical to listen to The Monkees, since, to put it in the terms of the article, they were so clearly trained on the Beatles with a few tweaks here or there?
People have found inspiration in other works since we've been creating art.
In music, if you use a snippet of someone else's recording you have to pay them. It wasn't always clear that would be the case, but that's where they landed. (You end up with Led Zeppelin and Beatles samples in some early rap.) Even "sound-alike" singers get litigated: https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.
But in visual art it's a little different. Borrowing is more common and remixing is kinda allowed. When does it become "transformative"? (I always think of the "Hope" poster lawsuit, where the borrower sued the photographer, as a strange one.)
https://www.law.columbia.edu/news/archive/obama-hope-poster-...
But what is the AI doing? I don't think it's taking the input given and being inspired... It's kinda just sampling, in a way that makes it seem like it's being original. Or is the unique training set and annotations that give the AI its unique output the art, in which case it's more original?
At some point some AI is going to spit something out too close to something else and the courts will probably have to decide.
Yeah, vegetarianism is on the list (along with religion, politics, and vi/emacs) of subjects that can completely derail a discussion. Another analogy might have created less distraction. I would have avoided this one personally.
> Stable Diffusion has been trained on millions of copyrighted images scraped from the web.
How is it different from how human artists train on copyrighted images?
Yet we have no trouble awarding them copyright on art which consists of elements of, or is heavily inspired by, copyrighted works they've seen during their education.
Human imagination can't create anything really novel. Everything you create is just cutting, stitching, and deforming what you've already seen in semi-random ways until you get something interesting to somebody. Try imagining how an alien might look.
> Human imagination can't create anything really novel. Everything you create is just cutting, stitching, and deforming what you've already seen in semi-random ways until you get something interesting to somebody.
That's an awfully low opinion of art and humanity.
This is ridiculous. It's not just AI models that built their abstract conceptions on copyrighted material; humans did too. When a human artist paints a futuristic dome, they are also subconsciously accessing millions of copyrighted images they've seen throughout their lives and using them "without consent". To be consistent, the author would need to avert their eyes and never look at copyrighted imagery.
Also, the comparison to veganism and animal suffering is off-putting.
There is so much doomsaying around these image generation AIs, and I don't really understand it. Did photographs devalue painters? Did digital art devalue painters? Did movies devalue theater? Did YouTube videos devalue cinematographers? Did Twitch streams or TikTok videos devalue YouTubers?
Technology has continuously brought us easier and more immediate ways to create art, inspiring new generations of artists who hone their skills with the new tech. Meanwhile, older forms of art continue to be valued alongside the new stuff.
I'm also having a hard time seeing the ethical crisis with these AIs being trained on copyrighted material. Styles are not (or at least should not be) copyrightable. An artist can be inspired by the works of another and go on to create something new. Many forms of derivative work are even specifically granted safety under existing copyright law.
Besides, it would be practically impossible to prove that a model was trained on copyrighted works. Even if we decided it was unethical, any law against it would be theoretical and practically unenforceable. Either way, artists will have to adapt.
I don't think the situation is nearly as dire as so many seem to believe. An AI can only reproduce a style that has been thoroughly explored by the content which it is trained on. New styles will continue to be rewarded. Digital artists will be encouraged to push boundaries. And for the time being, the AIs still have some pretty severe limitations, so artists will be able to capitalize on those.
And one more thing I never see brought up when talking about these AI image generators: there's already precedent for how this will play out. AI music composers have been around for many years now, but Dua Lipa and The Weeknd appear to be doing just fine. Even the more classical composers and orchestras seem to be going just as strong as ever. If AI artists show no sign of toppling the music industry, why should we expect the fate of digital images to be so different?
> There is so much doomsaying around these image generation AIs, and I don't really understand it. Did photographs devalue painters? Did digital art devalue painters? Did movies devalue theater? Did YouTube videos devalue cinematographers? Did Twitch streams or TikTok videos devalue YouTubers?
All of those are tools which did not lead to potentially the same final product produced for a tiny fraction of the labor costs.
AI composition is generally pretty shit. If (more like when) it becomes better, we will be having the exact same argument regarding composers.
So soft-PSA: the following is more than a little misleading:
“The fact that it can compress such an enormous quantity of visual information into such a small space is itself a fascinating detail.”
This is not a detail: it’s the principal mechanism. The ability to compress something is conferred by the identification and exploitation of structure; conversely, the scarcity or absence of structure inhibits or prohibits compression. You can eyeball-check an RNG with compression techniques.
This has counter-intuitive consequences that you can test on your laptop! Even using off-the-shelf codecs, it only takes a modest corpus to see that pop music compresses better than eclectic jazz, which compresses better than white noise. The same thing holds for headshots of people: a big pile of headshots drawn from a reasonably broad corpus of humans will compress noticeably worse than a subset selected by any plausible “conventional attractiveness” filter. “Conventional attractiveness” (defined any common-sense way) correlates sharply with bilateral symmetry, with obvious implications for storage space.
Information theory is the thread that ties together all this AI craze stuff!
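The structure-to-compressibility link above is checkable with the standard library alone. A quick sketch, using zlib as a stand-in for an off-the-shelf codec: highly repetitive data (the "pop music" of byte strings) compresses dramatically better than random noise.

```python
import os
import zlib

def ratio(data: bytes) -> float:
    # Compressed size over original size: lower means more exploitable structure.
    return len(zlib.compress(data, 9)) / len(data)

repetitive = b"verse chorus verse chorus bridge " * 300  # heavily structured
noise = os.urandom(len(repetitive))                      # white-noise stand-in

print(f"repetitive: {ratio(repetitive):.3f}")  # well under 1.0
print(f"noise:      {ratio(noise):.3f}")       # around 1.0: incompressible
```

The same experiment works on any corpus you have lying around; only the magnitude of the gap changes.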
For me, this is easy because I consider copyright amoral — that is to say that copyright does not involve morality or immorality, and only its formal legal definition is meaningful.
I am not saying that attribution doesn’t have an ethical dimension. Only that attribution is morally/ethically independent of the legal construct of copyright.
Not that I think it matters much because I don’t expect AI art will have much impact as art in the long run…which is not the same as saying it won’t have much impact in the realm of disposable images.
The reason I think AI art won’t be terribly important is that there do not appear to be very many intellectually interesting things to say about it as art beyond is-it-art-because-I-hung-it-on-the-wall?
And none of this is to say that artists won’t make interesting AI art. Because artists and their processes are intellectually interesting in ways that training sets are not. I mean David Hockney’s iPad art is interesting because of David Hockney not because of iPads.
I'm not an AI Vegan, but perhaps an advocate of the AI "Eat Local" movement (of which I might be the only member :P)?
I similarly understand and sympathise with the apprehension about generating images from this massive harvesting of data, with little regard for what should and should not be in the dataset.
I think by using these models as pre-training weights and fine-tuning on data which one believes they have the right to use or (even better) has created themselves, you can (IMHO) greatly minimise the harm of your model's output.
I also like this from a conceptual stand-point. We have the right to learn and be inspired by others, but when it comes to putting the paint brush to the canvas or the ink to the page, it should be our own experiences that we draw primarily from.
Artists look at images (copyrighted or not) while working on their own images all the time. Many of Shakespeare's works are re-writes of stories that were common in his era. How is this any different?
Another example to consider as an intuition pump might be to ask how many engineers are going to publish their FOSS with a “using this as AI training input is forbidden” clause, or otherwise battle against training Copilot and other code authoring systems.
Art is typically quite style-promiscuous, and mostly protected by copyright. It’s also quite analog/continuous. Code is more discrete; you can diff it and easily see if an AI copied a particular block that doesn’t appear in other projects.
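A minimal sketch of that "just diff it" point, using Python's difflib (the snippets here are invented examples): a verbatim copy scores a perfect similarity ratio, while unrelated code scores far lower.

```python
import difflib

# Invented example snippets: a block from an existing project, a
# "generated" snippet that copies it verbatim, and unrelated code.
project_block = (
    'def parse_header(line):\n'
    '    key, _, value = line.partition(":")\n'
    '    return key.strip().lower(), value.strip()\n'
)
generated = project_block  # a verbatim copy
unrelated = "def area(w, h):\n    return w * h\n"

copied_score = difflib.SequenceMatcher(None, generated, project_block).ratio()
unrelated_score = difflib.SequenceMatcher(None, generated, unrelated).ratio()

print(f"vs project block: {copied_score:.2f}")   # 1.00 for a verbatim copy
print(f"vs unrelated:     {unrelated_score:.2f}")
```

Because code is discrete, this kind of check can be run mechanically against whole corpora, which is exactly what makes copied blocks easier to catch than copied brushstrokes.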
Personally I don’t think there is a strong argument that training an AI on art is problematic, as long as the AI doesn’t then commit copyright infringement. I think this process is equivalent to art students consuming art and synthesizing their own styles.
I think it’s more problematic if AI starts regurgitating discrete chunks of FOSS code, since that’s probably a license violation.
For digital artists, lots of skills are going to become irrelevant; the raw skill of pixel pushing and line work may well become obsolete. But the hardest part was always having good taste, composition, ideation. These are not going away any time soon.
And if you enjoy pushing pixels around, nothing to stop you doing that; we still play chess for fun even though computers dominate humans at the task. Indeed, computers make very scalable and accessible coaches; one could argue that chess is more fun, and easier to learn, now that computers have superseded humans. Perhaps the same will be true about art, and later, coding.
Just like those art students you reference, engineers regularly find bits of code and regurgitate it in their projects. Stack Overflow gets its reputation for a reason. They especially do this earlier in their careers, before they develop their own intuition and style, just like an art student.
Why would you treat the ML model (I very much hesitate to call it AI) differently in these two cases?
> I think it’s more problematic if AI starts regurgitating discrete chunks of FOSS code, since that’s probably a license violation.
What if the code is 99% FOSS code, but with variable names changed say?
I'm curious because for humans I think that would still count as copyright infringement, but GitHub seems to only filter out 100% code matches.
I suppose copyright law is not precise enough that you could algorithmically decide if two pieces of code are too close? The only way to be sure seems to be "clean room design", where the AI hasn't seen the FOSS code in the first place.
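To make the variable-renaming point concrete, here is a toy sketch (an illustration only, not how GitHub's actual filter works): normalise identifiers before comparing, and a pure rename no longer escapes detection.

```python
import io
import keyword
import tokenize

def fingerprint(source: str) -> list:
    # Token stream with every non-keyword identifier replaced by a
    # placeholder, so renaming variables doesn't change the result.
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            out.append("<ID>")
        elif tok.type == tokenize.COMMENT:
            continue  # comments don't affect structure
        else:
            out.append((tok.type, tok.string))
    return out

original = "def total(xs):\n    acc = 0\n    for x in xs:\n        acc += x\n    return acc\n"
renamed = "def sum_all(values):\n    s = 0\n    for v in values:\n        s += v\n    return s\n"

print(original == renamed)                            # False: the text differs
print(fingerprint(original) == fingerprint(renamed))  # True: structure matches
```

A filter that only looks for 100% text matches misses the second case entirely, even though a human reviewer would call it the same code.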
I will not, on the simple basis that this tool will empower me to create things I never could have imagined: essentially a tool that transforms imagination into images in the same way that my own imagination does.
It will take very very large ethical issues, direct death or suffering, for me to reconsider. It's just such an amazing opportunity that I am not willing to just give it away for nothing.
As the author of "Digital Vegan" [1] and "Ethics For Hackers" there's much about this thread that pleases me. Some very intelligent discussion is emerging.

I feel both that the time has come for fundamental cultural change within the tech/hacker community, and that expressions like "AI Veganism" are indeed appropriate and useful on many levels. They enable the discussion of lifestyle choice, rights, health, ecology and many other salient factors in relation to digital technology.

At this point, those still clinging to a rejection of modern tech-critique as in any sense anti-progressive or "Luddite" are in fact the ones woefully out of date and out of step with how the world is turning.
Amazing from-the-future level technology. Interesting topic. Very distracting analogy.
Might be a textbook example of technological progress: Lots of people have their jobs eliminated (which suuucks), but many more benefit from having access to professional-level custom artwork that they didn't have before.
AI/ML is coming for everything. High-paying jobs like doctors, lawyers, and developers are definitely not immune.
For those opining that training a model isn't dissimilar to how humans acquire knowledge: human artists take decades to do it; and eventually, human artists die.
These aren't constraints on AI developed, controlled, and accessible only to enormous corporate entities; even if they deign to provide limited access to the public at times.
Art is not about form. It's about content: meaning that can be generated and understood only by humans.
The part where the company training the models makes $$$ on copyrighted work can definitely be a debate point.
[+] [-] colordrops|3 years ago|reply
Also the comparison to veganism and animal suffering is off putting.
[+] [-] CivBase|3 years ago|reply
Technology has continuously brought us easier and more immediate ways to create art, inspiring new generations of artists who hone their skills with the new tech. Meanwhile, older forms of art continue to be valued along side the new stuff.
I'm also having a hard time seeing the ethical crisis with these AIs being trained on copyrighted material. Styles are not (or at least should not be) copyrightable. An artist can be inspired by the works of another and go on to create something new. Many forms of derivative work are even specifically granted safety under existing copyright law.
Besides, it would be practically impossible to prove that a model was trained on copyrighted works. Even if we decided it was unethical, any law against it would be theoretical and practically unenforceable. Either way, artists will have to adapt.
I don't think the situation is near as dire as so many seem to believe. An AI can only reproduce a style that has been thoroughly explored by the content which it is trained on. New styles will continue to be rewarded. Digital artists will be encouraged to push boundaries. And for the time being, the AIs still have some pretty severe limitations so artists will be able to capitalize on those.
And one more thing I never see brought up when talking about these AI image generators: there's already precedent for how this will play out. AI music composers have been around for many years now, but Dua Lipa and The Weeknd appear to be doing just fine. Even the more classical composers and orchestras seem to be going just as strong as ever. If AI artists show no sign of toppling the music industry, why should we expect the fate of digital images to be so different?
[+] [-] trention|3 years ago|reply
All of those were tools which did not lead to essentially the same final product being produced for a tiny fraction of the labor cost.
AI composition is generally pretty shit. If (more like when) it becomes better, we will be having the exact same argument regarding composers.
[+] [-] benreesman|3 years ago|reply
“The fact that it can compress such an enormous quantity of visual information into such a small space is itself a fascinating detail.”
This is not a detail: it's the principal mechanism. The ability to compress something is conferred by the identification and exploitation of structure; conversely, the scarcity or absence of structure inhibits or prohibits compression. You can eyeball-check an RNG with compression techniques.
This has counter-intuitive consequences that you can test on your laptop! Even using off-the-shelf codecs it only takes a modest corpus to see that pop music compresses better than eclectic jazz, which compresses better than white noise. The same thing holds for headshots of people: a big pile of headshots drawn from a reasonably broad corpus of humans will enjoy a noticeably lower compression ratio than a subset selected by any plausible “conventional attractiveness” filter. “Conventional attractiveness” (defined any common-sense way) correlates sharply with bilateral symmetry, with obvious implications for storage space.
Information theory is the thread that ties together all this AI craze stuff!
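The structure-enables-compression claim is easy to check on a laptop. A minimal sketch using Python's standard zlib, with a repetitive byte string standing in for "pop music" and random bytes standing in for "white noise" (the data here is synthetic, purely for illustration):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size divided by compressed size.
    Higher ratio = more exploitable structure."""
    return len(data) / len(zlib.compress(data, level=9))

structured = b"the quick brown fox " * 500   # highly repetitive: lots of structure
noise = os.urandom(len(structured))          # random bytes: essentially no structure

print(f"structured: {compression_ratio(structured):.1f}x")
print(f"noise:      {compression_ratio(noise):.2f}x")
```

The structured input compresses many times over, while the random input hovers around 1x (often slightly below, due to format overhead) — exactly the "eyeball check an RNG" trick described above.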
[+] [-] brudgers|3 years ago|reply
I am not saying that attribution doesn’t have an ethical dimension. Only that attribution is morally/ethically independent of the legal construct of copyright.
Not that I think it matters much because I don’t expect AI art will have much impact as art in the long run…which is not the same as saying it won’t have much impact in the realm of disposable images.
The reason I think AI art won’t be terribly important is that there do not appear to be very many intellectually interesting things to say about it as art beyond is-it-art-because-I-hung-it-on-the-wall?
And none of this is to say that artists won’t make interesting AI art. Because artists and their processes are intellectually interesting in ways that training sets are not. I mean David Hockney’s iPad art is interesting because of David Hockney not because of iPads.
YMMV.
[+] [-] jszymborski|3 years ago|reply
I similarly understand and sympathise with the apprehension about generating images from this massive harvesting of data with little regard for what should and should not be in the dataset.
I think that by using these models as pre-trained weights and fine-tuning on data which you believe you have the right to use or (even better) have created yourself, you can (IMHO) greatly minimise the harm of your model's output.
I also like this from a conceptual standpoint. We have the right to learn from and be inspired by others, but when it comes to putting the paintbrush to the canvas or the ink to the page, it should primarily be our own experiences that we draw from.
[+] [-] simonw|3 years ago|reply
I'd love to be able to run GitHub Copilot weighted heavily towards code that I've written myself, as an additional training layer.
[+] [-] theptip|3 years ago|reply
Art is typically quite style-promiscuous, and mostly protected by copyright. It’s also quite analog/continuous. Code is more discrete; you can diff it and easily see if an AI copied a particular block that doesn’t appear in other projects.
Personally I don’t think there is a strong argument that training an AI on art is problematic, as long as the AI doesn’t then commit copyright infringement. I think this process is equivalent to art students consuming art and synthesizing their own styles.
I think it’s more problematic if AI starts regurgitating discrete chunks of FOSS code, since that’s probably a license violation.
For digital artists, lots of skills are going to become irrelevant; the raw skill of pixel pushing and line work may well become obsolete. But the hardest part was always having good taste, composition, ideation. These are not going away any time soon.
And if you enjoy pushing pixels around, nothing to stop you doing that; we still play chess for fun even though computers dominate humans at the task. Indeed, computers make very scalable and accessible coaches; one could argue that chess is more fun, and easier to learn, now that computers have superseded humans. Perhaps the same will be true about art, and later, coding.
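The "code is discrete, you can diff it" point above can be sketched with Python's standard difflib. The snippets here are made up for illustration, but they show how a similarity ratio flags a near-verbatim copy even when it wouldn't match exactly:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; values near 1.0
    suggest a near-verbatim copy."""
    return difflib.SequenceMatcher(None, a, b).ratio()

original  = "def gcd(a, b):\n    while b:\n        a, b = b, a % b\n    return a\n"
renamed   = "def gcd(x, y):\n    while y:\n        x, y = y, x % y\n    return x\n"
unrelated = "def mean(xs):\n    return sum(xs) / len(xs)\n"

print(similarity(original, renamed))    # high: only variable names differ
print(similarity(original, unrelated))  # much lower
```

This kind of check is exactly what's hard to do with a painting: there's no canonical "diff" for two images in different styles, but two blocks of code can be compared mechanically.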
[+] [-] sulam|3 years ago|reply
Why would you treat the ML model (I very much hesitate to call it AI) differently in these two cases?
[+] [-] thomasahle|3 years ago|reply
What if the code is 99% FOSS code, but with, say, variable names changed? I'm curious because for humans I think that would still count as copyright infringement, but GitHub seems to only filter out 100% code matches.
I suppose copyright law is not precise enough that you could algorithmically decide whether two pieces of code are too close? The only way to be sure seems to be "clean room design", where the AI hasn't seen the FOSS code in the first place?
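The renamed-variables case is actually one of the easier ones to catch mechanically. A minimal sketch (not how GitHub's filter works — that's an assumption-free illustration using Python's standard tokenize module): blank out identifiers and literals, keep keywords and operators, and two copies that differ only in variable names produce identical fingerprints:

```python
import io
import keyword
import tokenize

def fingerprint(src: str) -> tuple:
    """Token stream with identifiers and numbers blanked out, so
    renaming variables doesn't change the result. Whitespace and
    comments are dropped (a simplification)."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            out.append("NAME")       # any identifier
        elif tok.type == tokenize.NUMBER:
            out.append("NUM")        # any numeric literal
        elif tok.type in (tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
                          tokenize.DEDENT, tokenize.COMMENT, tokenize.ENDMARKER):
            continue                 # ignore layout and comments
        else:
            out.append(tok.string)   # keywords, operators, punctuation
    return tuple(out)

a = "def gcd(a, b):\n    while b:\n        a, b = b, a % b\n    return a\n"
b = "def gcd(x, y):\n    while y:\n        x, y = y, x % y\n    return x\n"

print(fingerprint(a) == fingerprint(b))  # True: the rename doesn't hide the copy
```

Whether a matching fingerprint legally counts as infringement is of course the hard, non-algorithmic part — this only shows the detection side isn't the obstacle.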
[+] [-] bioemerl|3 years ago|reply
It would take very, very large ethical issues (direct death or suffering) for me to reconsider. It's just such an amazing opportunity that I am not willing to give it away for nothing.
[+] [-] KingOfCoders|3 years ago|reply
Artists have been trained on hundreds of copyrighted images in magazines, museums and on the web.
[+] [-] nonrandomstring|3 years ago|reply
I feel both that the time has come for fundamental cultural change within the tech/hacker community, and that expressions like "AI Veganism" are indeed appropriate and useful on many levels.
They enable the discussion of lifestyle choice, rights, health, ecology and many other salient factors in relation to digital technology.
At this point, those still clinging to a rejection of modern tech-critique as in any sense anti-progressive or "Luddite" are in fact the ones woefully out of date and out of step with how the world is turning.
[1] https://digitalvegan.net
[+] [-] xnx|3 years ago|reply
Might be a textbook example of technological progress: Lots of people have their jobs eliminated (which suuucks), but many more benefit from having access to professional-level custom artwork that they didn't have before.
AI/ML is coming for everything. High-paying jobs like doctors, lawyers, and developers are definitely not immune.
[+] [-] dleslie|3 years ago|reply
These aren't constraints on AI developed by, controlled by, and accessible only to enormous corporate entities, even if they deign to provide limited access to the public at times.