item 46033843

rsfern | 3 months ago

I think this is not quite the right analogy. A better analogy is procedurally generated music, because that’s what model-generated music is. But just like with LLM code generation, the input to the program is natural language (or maybe multimodal image/audio/whatever), and the program is implicitly defined by learning from examples of music.

I think a lot of the issues are the same. For example, you might expect the model to go off the rails if you venture away from the bulk of the training distribution. Or maybe the most effective way to use it creatively is some kind of interactive workflow, revising specific chunks of the project, instead of vibe-coding/composing from whole cloth.
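The analogy above can be made concrete: a classic procedural generator is an explicit, hand-written program mapping an input (here, a seed) to notes, whereas a generative model plays the same role with the rules learned from examples instead. A minimal illustrative sketch of the hand-written case (the scale, function name, and random-walk rule are all invented for illustration, not from any particular tool):

```python
import random

# MIDI note numbers for one octave of C major.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def procedural_melody(seed: int, length: int = 8) -> list[int]:
    """Random walk over a scale -- the 'program' is these explicit rules.

    A generative model would replace these hand-written rules with
    parameters learned from a corpus of music.
    """
    rng = random.Random(seed)
    idx = rng.randrange(len(C_MAJOR))
    notes = []
    for _ in range(length):
        notes.append(C_MAJOR[idx])
        # Step up or down the scale, clamped to its range.
        idx = max(0, min(len(C_MAJOR) - 1, idx + rng.choice([-1, 1])))
    return notes

melody = procedural_melody(42)
```

The same seed always yields the same melody, which is exactly the kind of inspectable, revisable behavior the "interactive workflow" point is getting at.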

moritzwarhier | 3 months ago

Where GenAI goes off the rails is where it starts being interesting for art, IMO :)

edit: typo