top | item 41408173

Why A.I. Isn't Going to Make Art

49 points | eludwig | 1 year ago | newyorker.com

139 comments


mrmetanoia|1 year ago

I love articles that tell me why something they can't know is something they like to guess at anyway. This is pointless. Art will get made. Artists will use AI to make some of it. The debate about whether or not it's art will be part of the art. This article is not part of anything; it's just throwing punches at air in a tantrum.

llamaimperative|1 year ago

> The debate about whether or not it's art will be part of the art

> This article is not part of anything

Seems incoherent I think?

lubujackson|1 year ago

I agree that the article is poorly argued. I think the reason AI can't create art (at least in the near term) comes down to your definition of art. To me, it is a form of communication. This certainly doesn't mean AI can't produce something that people consider art; it means it is incapable of differentiating something humans will consider artistic vs. copy pasta vs. hallucination.

In the end, there is a human curator that decides which creation by the AI is art. There is a human that writes the prompt that creates it. There is human intent as part of the mix, so AI can certainly create art, but it is no different than a camera that removes many of the choices people make and presents new ones - which means it is a tool for creating art at a different level of abstraction.

throwaway201606|1 year ago

I agree; can’t know what AI will do at this point so punditry like this is just meh.

… and this analysis generalizes all types of AI based on characteristics of just one type - LLMs - that select the "next best word" (my layman's understanding of what they do). With so many other types of AI out there, we will eventually get new approaches to AI where the arguments presented won't make sense.

About throwing punches in the air in a tantrum - that was a funny comment. So funny that I wanted to see if someone made art of it. Found something and sharing a link here that is pretty short and to the point, sweet and cool

(I hate mystery links so some info - it's a performance art piece called "Plastic Bag" on YouTube, 5 mins long, about 60k views, where someone throws punches in the air at a plastic bag)

https://youtu.be/-W6rn2cWs2g

andsoitis|1 year ago

> Art will get made.

There's art. And then there is Art.

bookofjoe|1 year ago

"Whereof one cannot speak, thereof one must be silent." — Wittgenstein (Tractatus 7)

1vuio0pswjnm7|1 year ago

"I love articles that tell me why something they can't know is something they like to guess at anyway."

HN is an excellent place to find hundreds of articles and comments predicting the future, i.e., something authors cannot know but like to guess at anyway, especially regarding the future capabilities and predicted use of "AI". There has been a steady stream of this gibberish pertaining to "AI" ever since the announcement of ChatGPT.

In a recent HN poll a majority of voters indicated they thought that "AI" was overhyped.

The parent comment is yet another example of a comment that tells us something the commenter cannot know but wants to guess at anyway. For example,

"Artists will use AI to make some of it. The debate about whether or not it's art will be part of the art."

becquerel|1 year ago

This is an amusingly ignorant article. Its initial argument, that AI tools don't produce art because they offer meaningfully fewer knobs and dials to creators than a camera, is a classic example of mistaking the contingent for the essential. Those knobs and dials do exist. Download a copy of Stable Diffusion and see all the things you can tweak, iteratively, using the same seed, to work towards an image you desire. The same applies for text.

As it happens I have been using Claude quite extensively as a drafting partner over the past few months for writing a novel. I enjoy plotting, planning and editing, but not drafting, so I let it do the zeroth draft for me. It has been quite a productive arrangement.

typon|1 year ago

My friend is a very good painter, but not a famous painter. In the art world, its an open secret that most famous painters, especially ones that are old, don't really paint much. They hire "ghost-painters" to do the actual work for them, and they simply set the direction of the art pieces and collaborate with the hired-on-contract ghost-painters. My friend has painted for a bunch of these artists and when I ask her whether its unethical, she just shrugs her shoulders because she needs to pay for rent but also, importantly, she thinks that the painting really does belong to the artist setting the direction - that she's merely doing the grunt work.

Are the thousands of choices of which brush strokes to put where actually the seed of creativity? According to some artists - not really.

kubectl_h|1 year ago

> Its initial argument, that AI tools don't produce art because they offer meaningfully fewer knobs and dials to creators than a camera

Chiang is not saying this at all. I'm not sure how you interpreted it this way.

rpdillon|1 year ago

Reminds me of when Ebert argued video games can't be art.

https://www.rogerebert.com/roger-ebert/video-games-can-never...

The argument rings just as silly today as it did 12 years ago.

surgical_fire|1 year ago

The problem with this discussion is not only that the definition of "art" lacks a consensus. The main issue is that, instead of using it as a descriptor, a lot of people call something "art" as a compliment. Which is utterly silly, by the way.

Because at that point "art" becomes simply something that is "aesthetically pleasant", and that will change from person to person. "Art" as a compliment is useless.

If we try to use "art" as a descriptor, then we would need to draw a hypothetical Venn diagram and define things that are art and things that are not, so we could try to categorize videogames, or whatever AI produces. This implies a lot more agreement than there currently is.

kubectl_h|1 year ago

Over the summer I have had about half a dozen conversations with people who work in non-technical fields (university administration, healthcare, government administration, teachers, etc) who are furtively using ChatGPT to augment their communication tasks. There is a hushed-tones quality to them admitting it.

I suspect the rate of individualized adoption of AI augmented writing is well beyond what a casual observer here on HN would think it is.

I also share Chiang's worry about this:

> We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?

I do not think OpenAI et al. set out to create a self-perpetuating slop machine like this, but it sure feels like this is where it is going. For individuals it improves their lives, I guess, but when zoomed out there is something quite dystopian about it.

some_random|1 year ago

Artists have been spending decades redefining art to broaden it as much as possible, to the point where it seems to be defined now as anything that makes you feel something. It sure seems like AI art is making some people feel something alright.

mattmaroon|1 year ago

If Jackson Pollock made art I don’t see why whoever programmed the AI didn’t.

Retric|1 year ago

Do you feel anything when looking at the DALL-E 3 images on this page? https://openai.com/index/dall-e-3/

I can’t tell you why, but I don’t really react to any of them or really any AI art I’ve seen.

bugglebeetle|1 year ago

The problem with this article is thinking that art is some special category vs. a demonstrative one. The reason AI can’t make good art (generally) is because it is limited by the ability of its users to have knowledge of how to make those choices, relative to their own abilities, taste, knowledge etc. This is just as true of code as it is of art. An LLM isn’t going to implement your request using efficient or novel data structures unless you know those things exist and can instruct it to use or assist you in developing them. While models as they currently exist (and are fine-tuned) may be biased toward code slop and art slop, this is because slop is the level at which most people are operating.

tkgally|1 year ago

For an essay that hinges on the notion that “art requires making choices,” I wish the author had chosen to delve a bit more into the question of what choosing is. Do humans really make choices? If so, how free are those choices? Is there something about human choice that will never be convincingly imitated by computer? If so, what is it, and how do you know it cannot be imitated?

dragonwriter|1 year ago

> For an essay that hinges on the notion that “art requires making choices,” I wish the author had chosen to delve a bit more into the question of what choosing is.

For an essay that hinges on the notion that "art requires making choices", and attempts to apply this to AI image generation, and even specifically tries to draw a contrast between that and photography, I wish the author demonstrated even a superficial knowledge of what choices go into AI art and how it compares with photography, much less delving into the kind of deep philosophical examination of the underlying premise that you propose. But it looks like exploring the subject beyond the shallowest text-box-only UIs presented by a few big firms is too much to ask before the author does exactly what he says was naively done early on about photography.

doph|1 year ago

Exactly my thoughts. I think people who make arguments like Chiang's are unwilling to examine their own decision-making process, and in particular are unwilling to entertain the idea that it is as mechanistic as an artificial neural network.

BrannonKing|1 year ago

I don't need AI to make art; I need it to extrapolate. Suppose I want to make a movie or a game. I can come up with some distinct art for characters and places and things. I want the AI to extrapolate from that information and build the rest of the world. I don't have time to make the artwork for the entire world/movie/etc. I want to feed it some minimal amount of artwork plus a bunch of scene scripts and end up with a unique-looking movie. I could then go back and enhance areas of the movie by uploading additional artwork (or descriptive text) to scenes that I feel were lacking something.

OJFord|1 year ago

If you give it the year's headlines and today's newspaper and ask for a picture that's a social commentary on some current affair, how is that not art?

And you didn't prompt it any more than commissioning a piece, or making a thematic suggestion to a painter friend.

You may not like its art, and it may not come up with some whole new original style, but that doesn't mean it isn't making art in known styles.

TFA is just a bit of a silly fearful protest, IMO.

giraffe_lady|1 year ago

Art is an act of human expression.

GaggiX|1 year ago

>it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland.

The model can't output the average, because the average is usually completely meaningless; that's why it's a generative model and not a regressive one. As always, these articles are written by people who don't really understand the technology and create their own interpretation of how it works, whether it is right or not.
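To illustrate the distinction this comment draws (a toy sketch only, not how any production model is implemented): a generative model samples from a learned distribution, and when that distribution has two distant modes, the numeric "average" can be a value the model assigns almost no probability to.

```python
import random

# Toy next-token distribution over a three-word vocabulary. The two
# plausible continuations sit at opposite ends; the "average" token
# between them gets almost no probability mass.
vocab = ["left", "middle", "right"]
probs = {"left": 0.49, "middle": 0.02, "right": 0.49}

def sample_token(rng):
    """Draw a token in proportion to its probability (generative),
    rather than returning any kind of mean (regressive)."""
    r = rng.random()
    cumulative = 0.0
    for token in vocab:
        cumulative += probs[token]
        if r < cumulative:
            return token
    return vocab[-1]

rng = random.Random(0)  # fixed seed so the run is repeatable
draws = [sample_token(rng) for _ in range(10_000)]

# The low-probability "middle" token is rarely produced, even though
# it is the average of the two modes.
print(draws.count("middle") / len(draws))
```

The same point applies to image models: averaging all plausible images yields grey mush, which is why they sample instead.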

Workaccount2|1 year ago

One thing we can guarantee is that human hubris isn't going to go away.

js8|1 year ago

The author seems to argue that, before photography existed, if someone commissioned a painter to create a portrait (without further qualification), then the painter didn't create an art piece.

I think it's pretty clear that generative AI makes a lot of small decisions. They might not be groundbreaking, or novel (as they aren't in a custom portrait), or somehow lack overall vision, but they are there.

elawler24|1 year ago

The definition of art seems to be the polarizing issue here. "But is the world better off with more documents that have had minimal effort expended on them?" Value is attributed to high-effort tasks that are a scarce resource [1]. When AI creates something in a short amount of time, derivative of the past, and available to everyone, it will be considered cheap. By this definition, there's still room for mass-market creative works to be created by AI. Netflix already does that, even with humans in control. But high economic value or notoriety will only come from taking the first draft and doing something physically or intellectually innovative with it for the first time - which I relate back to what Chiang writes.

[1] https://www.sciencedirect.com/science/article/abs/pii/S09218...

gmaster1440|1 year ago

It's refreshing to see a science fiction writer underplay the capabilities of AI, but if anyone can speak to the nuances and implications of generative AI on art and writing it's probably someone like Ted Chiang.

We can debate his generalized definition of art as making creative choices that carry subjective, intentional, and performative value for human beings (and therefore LLMs fall short of this), but I think he makes a couple strong points nonetheless:

1. The argument others like François Chollet have also made, that we have yet to see any AI systems capable of exhibiting intelligence beyond stylistic mimicry or forming generalized knowledge about concepts from large data sets.

2. The subjective experience of human interaction is valuable and desirable, and will remain so in the face of increasingly capable models, not because they won't be able to compete in producing inspiring art or enjoyable fiction, but because of the inherent primacy of human intentionality and experience.

pessimizer|1 year ago

> we have yet to see any AI systems capable of exhibiting intelligence beyond stylistic mimicry or forming generalized knowledge about concepts from large data sets.

I've said this long before useful LLMs, but I don't think we've observed this in humans, either. Human creativity can be put into two very similar categories:

1) Metaphor; the arbitrary application of the dynamics of one thing to another. "What if information is like water?" "What if the economy is like the human body?" "That woman is like a bird."

2) Bad copies. When you see someone's output and try to imitate it, but have to speculate about the creative and mechanical process that resulted in that output. You sometimes guess right and sometimes wrong, but the output is similar. Then you vary the parameters in order to create a new example, but since your process was different, with different parameters and different interactions, you create something different than the person you copied would have created.

1+2) both randomly often create emergent effects that are then copied by others, sometimes badly.

This is how Japanese metal can be the result of Black Americans copying songs from musicals and English/Irish drinking songs, British people copying the blues from Black Americans, Americans copying British Invasion music and NWOBHM, and then Japanese people copying American metal.

swayvil|1 year ago

We should ask an artist if it's art. The testimony of an expert firsthand witness is vastly superior to interpretation of a secondhand abstraction by inexpert persons.

f33d5173|1 year ago

> the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception

Sure. But suppose we had an AGI that was just as smart as a human: clearly it would be able to make all these decisions just fine and make art. If current AIs are somewhere between that and dirt, then they ought to be able to make decisions of less complexity, but still of some importance, to the final product. As AIs improve, we would expect the decisions they make to become more complex.

Recall that traditional artists have always had a number of assistants to help them. They produced the sketch and outline, and had other artists - skilled, but not as much as them - fill in the details. A modern artist, who already is less skilled than these artists, and furthermore has less need for their creation to stand the test of time, can benefit from an even less skilled assistant helping them.

antimemetics|1 year ago

„AI“ (a stupid term to begin with) is just a tool like any other - you can use it to make art.

Of course it’s not going to be creative on its own - it obviously is not intelligent.

But for me comfyui is an incredibly cool tool to be creative.

Such a boring topic after all - all the noise it attracts won’t amount to much once people understand this technology

rvense|1 year ago

I think it's an interesting debate: what is this thing we've made? And what does its existence teach us about ourselves?

You're right that just going "yes it is - no it isn't" isn't so interesting, but this mainly stems from the fact that "intelligence" is a poorly defined, pre-scientific term. And really, most times you're talking about whether some X is a Y, you're not so much talking about X as about your definition of Y.

I think the thing is, with LLMs/Generative AI we see some aspects of ourselves, but not enough that we can accept that it is fully like us, hence the resistance. To me, the answer is clear: what is usually called intelligence is actually several different things, of which whatever it is an LLM does is one.

Devasta|1 year ago

People do understand it, no one is creating art by thumping "tracer overwatch big boobs trending on artstation" on a keyboard and then heading to lunch.

This idea that people who disagree couldn't possibly understand is misguided at best.

mistrial9|1 year ago

does the tooling augment a human, or is the tooling sold to replace a human?

pessimizer|1 year ago

AI can't make art in the way we define art, because we usually define art by who made it, rather than any characteristic of the object in and of itself. But if anything, a criterion that makes an object into art is its uselessness relative to its value:

Painting a fence? Not art, because it keeps the wood from rotting.

Painting a fence hot pink? Art because there's no good reason to paint a fence that color.

If we discover that birds hate hot pink fences, and that makes them last longer? Not art again.

A rich guy pays a million dollars for a hot pink fencepost? Art. Who's the guy who sold the hot pink fencepost? Does he have any other colors?

squidbeak|1 year ago

Those analysts and commentators who use the shortcomings of a nascent technology to rule out possibilities far down the line are extremely foolish.

Whatever anyone thinks about the limitations of LLMs, or whether AI in its current form is sales hype - can anyone sensibly claim that AI 1000 years from now won't be capable of an artistic sensibility? Until there's some proof that there is a secret ingredient in human consciousness that can never be developed by AI - not even a self-aware AI - anyone attempting to lay an imaginary ceiling over the tech is deceiving themselves.

AIorNot|1 year ago

The problem with this article is that it presupposes that the technology will remain the same, and in its current state I'd agree it isn't art

- but even assuming the rate of advancement slows down, eventually it will be making Art…

Havoc|1 year ago

The market for Art is very small. For the vast amount of visual content some generic AI placeholder is entirely sufficient. Won’t end up in an art gallery but it also doesn’t need to.

nwoli|1 year ago

Loras on things most people don’t make loras for are already producing amazing art in my experiments (far away from the cookie cutter ai output style)

matt3210|1 year ago

LLMs are just art as code. From the article:

> he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit.

swayvil|1 year ago

The hand is wiser than the mind.

Doing art via the mind is like breathing through a soda straw.

Mind promises and promises but just keeps missing the mark.

I can make a perfect line with my hand in a moment. I can spend a year creating a "perfect line drawing device" and never get there.

The promise of mind is so tempting tho. Succumb to it and you end up living in a flavorless cartoon.

CuriouslyC|1 year ago

I'm disappointed in Ted here. For a writer that likes to delve into the possibilities of tech, he's sharing a surprisingly underbaked view. People outside of tech seem to think that AI creation is just "fat finger a prompt -> take output and claim to be an artist on the interwebs" but the reality is that all the people I know who actually call themselves AI artists do photobashing, image2image, controlnets, inpainting, custom models, etc. Likewise, the people I know using AI to write fiction are meticulously developing characters, timelines, scenes, story arcs, style samples, etc and using AI to handle creating rough drafts that they then hand tune.

ben_w|1 year ago

I believe he demonstrated awareness of the difference between lazy use and effortful use; he appeared to me to acknowledge the latter as art:

"""The film director Bennett Miller has used dall-e 2 to generate some very striking images that have been exhibited at the Gagosian gallery; to create them, he crafted detailed text prompts and then instructed dall-e to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit."""

becquerel|1 year ago

Something I've had in the back of my mind is that gen AI has enabled a new generation of outsider artists. That's all. It has lowered the barriers to entry for creativity so much that a whole host of people who have had no formal training or dialogue with Real Artists are able to jump in and just make things they want to see. No surprise their creations are ugly or bad or soulless or weird by conventional standards; that's the norm for outsider art

ToucanLoucan|1 year ago

> People outside of tech seem to think that AI creation is just "fat finger a prompt -> take output and claim to be an artist on the interwebs"

Probably because that's the predominant experience of people encountering AI art on the Internet. I have no doubt whatsoever that there are people out there using AI to do interesting things, but like with basically every technology, the vast, vast majority of the output you're going to see is people who see a labor saving device that can make doing... something, at scale, brain-dead easy. Be that generating shitty coloring books and selling them to overworked parents, generating shitty books on niche, dumb topics like the crystal healing woo shit and selling them to uncritical audiences, or just generating page upon page of boring, shitty artwork and uploading it to DeviantArt and paywalling it.

And that's just individuals. Many online businesses are actively enshittifying themselves too, adding AI generated content alongside (or in place of) human created content. On the note of DeviantArt, they built an AI generator into the damn site so people can fill it with even more low-effort garbage than was already getting uploaded. And of course Google now headlines your search results with a shitty LLM summary that runs the gamut between "dull, uninteresting summary of somewhat relevant information" to "complete nonsense that actively endangers lives" while also depriving even more websites of even more traffic that gave them whatever information in the first place.

Like, again, I have no problem envisioning some people and some orgs some place are doing interesting stuff with this tech. However I cannot overemphasize how utterly, completely, totally dog-shit my experience personally has been with it and how harshly I now tend to judge any project parading around AI integration. I'm open to being wrong... but I'm usually not.

There was that Vaudeville game that made the rounds that I felt was at least trying to do something interesting with LLMs, but like... the tech just wasn't there yet. You're talking to characters and can say basically whatever you want, and then an LLM generates an answer based on the context of that character and it's read back to you by text-to-speech. It's... neat? For like ten minutes, and then you're just playing a detective game with impressively bad writing and zero-effort VO, and the fact that the entire game was built of pre-built, unchanged assets made it feel incredibly cheap and low-effort. The only thing it's really good for is as streamer fodder, weird garbage for people to overreact to and fuck with for an audience.

H1Supreme|1 year ago

> Art is notoriously hard to define, and so are the differences between good art and bad art.

Which makes ChatGPT (or whatever) just as valid as any tool for creating art.

> What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.

As a life-long artist and musician, I agree with this. However, I find the artist's perspective lacking from this article. For many artists (myself included), the process is why we do it. It's truly therapeutic. I honestly cannot imagine my life without creative expression. Whether entering a prompt fulfills that for someone is up to them to decide. But, for me, it would remove the parts of creating art that give me joy.

jncfhnb|1 year ago

Mostly just highlighting how misinformed the author is. AI art tools are very much able to adjust small details

ToucanLoucan|1 year ago

Same. I don't know if I would call myself an artist, despite creating art for... Jesus, most of my life at this point, and making a bit of cash off it, at least enough to cover the power bill each month. I went into programming because I was keenly aware of how hard it is to make a living as an artist (and getting harder all the time!) but like... I simply cannot fathom enjoying "prompt engineering" nearly as much as my current creative processes.

I've used AI generators a few times because they're interesting little toys, but fundamentally, a creative process is literally thousands if not millions of tiny decisions that are informed by other decisions. If anything, that's what I would call an "artist's voice" in any given creative product: an at least somewhat consistent through-line through those decisions that gives the final piece the "life" that is so clearly missing from AI art, because all those millions of decisions, instead of being made by one or a few "voices," if you will, are replaced by millions of weighted-average decisions designed to reduce "error" in the product. It's quite literally soulless and people pick up on this, no matter how much the AI lovers want to scream Luddite at me; it's true.

That's not to say it's completely without purpose, I think this stuff is going to do gangbusters for corporate news pieces, blogs, spam sites, etc. If you want royalty free imagery to use for a thing, and don't give much of a shit about what it is, AI can handle that quite well. But I simply can't fathom someone with an intention, who wants to say something with an art piece using AI much, if at all.

AlienRobot|1 year ago

If you ever wonder why everything made with Stable Diffusion looks the same, it's because it can't generate images that are too dark or too bright. The denoising process involves recognizing shapes and shapes naturally have bright spots and dark spots. If you try to render "sea at night" you'll get some huge, bright moon for example.

The "AI artists" using this tool lack the technical and artistic competency to realize this. They didn't write the algorithm, draw the dataset, or train the model. They prompted. They have the smallest amount of creative input into this whole pipeline.

I do believe AI can be used in the process to create art, as it's just an image generator like fractal art, but the problem is most people are going to use AI not as a means to create art but as an end. You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that because they aren't interested in creating an art piece with a set goal in mind; they're just being entertained by generating dozens or hundreds of images with this technology.
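The kind of manual post-generation fix the comment describes can be sketched in a few lines; this toy operates on a flat list of 8-bit channel values rather than a real image file, and the function name and pixel values are illustrative (real workflows would use GIMP's brightness dialog or an editor's curves tool):

```python
def adjust_brightness(pixels, factor):
    """Scale 8-bit channel values by `factor`, clamping to [0, 255].

    factor < 1 darkens (e.g. fixing a too-bright "sea at night"
    render); factor > 1 brightens, with highlights clipped at 255.
    """
    return [min(255, max(0, round(p * factor))) for p in pixels]

# A washed-out night scene: mid-grey values that should be dark.
generated = [120, 130, 125, 118]
darkened = adjust_brightness(generated, 0.4)
print(darkened)  # [48, 52, 50, 47]
```

The point being made is not that this step is hard, but that it is a deliberate choice outside the prompt box, which is exactly the kind of choice most prompters never make.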

Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.

While I see the point in making a prompt interpreter capable of generating text literally, if I were creating something, I wouldn't let an AI randomly pick a font, color, weight, serifs/slabs, etc. for me. These are creative choices in design that make all the difference. Prompting gives the illusion of (creative) choice. You get something that looks good, but "getting something that looks good" is the default state. Anyone can do that. It's the AI art equivalent of drawing a stickman. The prompters just don't realize it because they're comparing themselves to artists of other media, not to other AI artists.

When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.

It was cool when AI was able to generate video, just like it was able to generate text. But in my opinion, those are feats of the technology, not artistic feats. The piece itself isn't interesting. It could be any video. Just the fact that the tech can do this is impressive. But it's just the tech that is impressive, not its output. Once the tech can do it once, it can do it every time, so the second time AI generates video is never going to be as impressive as the first time. By the thousandth time it will be as impressive as my ability to send this message to the other side of the world at the speed of light.

dragonwriter|1 year ago

> If you ever wonder why everything made with Stable Diffusion looks the same

Everything you know is made with Stable Diffusion looks the same because if it doesn't look the same you probably don't know it was made with Stable Diffusion.

> The "AI artists" using this tool lack the technical and artistic competency to realize this.

No, they don't. It's been a frequent comment in the AI art community, and a thing for which the community has sought and produced both in-generation and auxiliary-tooling solutions from very early on.

> They didn't write the algorithm, draw the dataset, or train the model.

Perhaps not for the base model, but people in the AI art community have done all three of those for improvements to, and tools built around, the base models and the original code implementation of them.

> I do believe AI can be used in the process to create art as it's just an image generator like fractal art, but the problem is most people are going to use AI not as a means to create art, but as an end.

Most of the people who are using any tool that can be used artistically are going to use it at the most superficial level. Is that true of AI image generators? Sure. But no moreso than it is true of, say, pencils.

> You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that because they aren't interesting into creating an art piece with a set goal in mind, they're just being entertained by generating dozens or hundreds of images with this technology.

People are using AI image generation with a set goal in mind, and people absolutely do import generated images into traditional image editors for adjustments. Though a lot of the people who really know what they are doing have that built into their workflows, reducing the need to do manual spot correction in a separate editor.
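For what it's worth, the kind of brightness fix mentioned above doesn't even need a full editor like GIMP; it can be scripted as a post-generation step in a few lines. A minimal sketch using the Pillow library, where the 1.5 factor and the synthetic stand-in image are arbitrary illustrations, not anyone's actual workflow:

```python
# Sketch of a post-generation brightness tweak using Pillow
# (third-party library; the factor of 1.5 is an arbitrary example).
from PIL import Image, ImageEnhance

def brighten(img, factor=1.5):
    """Return a copy of img with every channel scaled by factor."""
    return ImageEnhance.Brightness(img).enhance(factor)

# Stand-in for a generated image: a uniform mid-grey 2x2 RGB image.
generated = Image.new("RGB", (2, 2), (100, 100, 100))
adjusted = brighten(generated)
print(adjusted.getpixel((0, 0)))  # (150, 150, 150)
```

In a real pipeline the same call would run on `Image.open(...)` output, which is why people bake it into their generation workflows rather than round-tripping through a separate editor.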

> Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.

Image generation models have been able to do text to a certain extent for a while, and improved text generation has been a major trumpeted feature of many recent model releases. Flux isn't interesting because "it can do text literals"; it is interesting because the community has discovered that it can be fine-tuned (specifically, that LoRAs can be trained for it) to allow control of text style, similar to fonts.

I wasn't aware that GIMP could conform typed text to the implicit 3D shape of the surfaces it is being placed on in a 2D image, though.

> When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.

Probably. So what? Novelty isn't the point in every image people produce. Lowering the cost and effort to produce basically "looks good" images for lots of casual uses isn't, itself, an advance in fine art, sure. But it is, in itself, useful.

dbrueck|1 year ago

Because the definition of 'art' is somewhat philosophical, the more salient question is "will AI make something indistinguishable from art?" and the answer is easy: yes.

swayvil|1 year ago

There are none so confident as the ignorant.

Ask an artist what's art? Hell no.

ben_w|1 year ago

Some I agree with, some I disagree with. I think this author mainly speaks to the idea of art being the human equivalent of a peacock's tail: the effort is the point, not the result.

Myself, I like results: a metaphor about the scent of roses is just as sweet even after I find out it came from an LLM.

> I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.

In the art of words,
Even the briefest form has weight,
Prompt and haiku both.

> This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning.

That would be an improvement on what I've been going through with the novel I started writing before the Attention Is All You Need paper — I've probably written 200,000 words, and it's currently stuck at 90% complete and 90,000 words long.

> Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art.

I agree completely. The better and worse examples of AI-generated work are very obvious, and I think the difference comes down to how much attention to detail people pay to the result. This applies to both text and images: think of all the cases in the first few months where you could spot fake reviews and fake books because they started "As a large language model…"

The quality of the output then becomes how good the user is at reviewing the result: I can't draw hands, but that doesn't stop me from being able to reject the incorrect outputs. Conversely I know essentially nothing about motorbikes, so if an AI (image or text) makes a fundamental error about them, I won't notice the error and would therefore let it pass.

> Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.

This has been the case so far, but even then not entirely. To use the example of photographs, even CCTV footage can be interesting and amusing. Yes, this involves filtering out all the irrelevant footage, and yes, that filtering is itself an act of effort, but even there the greatest effort is the easiest to automate: has anything at all even happened in this image?

To me, this matches the argument between the value of hand-made vs. factory made items. Especially in the early days, the work of an artisan is better than the same mass-produced item. An automated loom replacing artisans, pre-recorded music replacing live bands in cinemas and bars, cameras replacing painters, were all strictly worse in the first instance, but despite this they remained worth consuming — even in, as per the acknowledgement in the article itself: "When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure."

> Language is, by definition, a system of communication, and it requires an intention to communicate.

I do not see any requirement for "intention", but perhaps it is a question of definitions — at most I would reverse the causality, and say that if you believe such a requirement exists, then whatever it is you mean by "intention" must be present in an AI that behaves like an LLM.

> There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.

Despite knowing how they work, I am unsure of this. I do not know how it is that I, a bag of mostly-water whose thinking bits are salty protein electrochemical gradients, can have subjective experiences.

I do know that ChatGPT is learning to act like us. On the one hand, it is conceivable that it could use some of its vector space to represent emotional affect that closely corresponds to the levels of serotonin, adrenaline, dopamine, and oxytocin in a real human, and I can even test this simply by asking it to pretend it has elevated or suppressed levels of these things.

On the other, don't get me wrong, my base assumption here is that it's just acting: I know that there are many other things, such as VHS tapes, which can reproduce the emotional affect of any real human, present any argument about their own personhood, beg to not be switched off, and I know that none of it is real. Even the human who was filmed, whose affect and words ended up on the tape, was most likely faking all those things.

I have no way to tell whether what ChatGPT is doing is more like consciousness, or more like what a cargo cult's hand-carved, walkie-talkie-shaped object is to the US forces in the Pacific in WW2.

But when it's good enough at pretending… if you can't tell, does it matter?

> Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling.

> it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes.

100% true. Even if, for the sake of argument, I assume that an LLM has feelings, there's absolutely no reason to assume that those feelings are the ones that it appears to have to our eyes. The author gives an example of dogs, writing "A dog can communicate that it is happy to see you" — but we know from tests, that owners believe dogs have a "guilty face" which is really a "submission face", because we can't really read canine body language as well as we think we can: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310318/

Also, these models are trained to maximise our happiness with their output. One thing I can be sure of is they're sycophants.

> The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

> By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills.

Both fantastic examples.

slowhadoken|1 year ago

Art is made by people. It’s not complicated.

Almondsetat|1 year ago

Is that just your axiomatic definition or do you have an actual reason to claim so?

GaggiX|1 year ago

You have opened a big can of worms that was explored a century ago. If I throw a bucket of paint on a canvas and let it drip, is it made by people? I could argue that the entire painting was made by gravity, which is definitely not human. Where do you draw the line? Is it art if the work is completely digital and made with digital tools? What if I use smart brushes? A camera?

CuriouslyC|1 year ago

Art is perceived by people. It's not complicated.