bloep | 3 years ago

There are quite simple tricks to avoid repetition/copying in NNs, e.g. (1) training a model to predict the "popularity" of the main model's outputs and penalizing popular/copied productions by backpropping through that model so as to decrease the predicted popularity, (2) conditioning on random inputs (LLMs can be prompted with imaginary "ID XXX" prefixes before each example to mitigate repetition), or (3) increasing the temperature or optimizing for higher entropy. LLM outputs are already extremely diverse, and verbatim copying is not a huge issue at all. The point being: all the evidence suggests this is not a showstopper if you massage these evolutionary methods for long enough in one or more of the various right ways.
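Trick (3) is easy to make concrete: sampling with a higher softmax temperature flattens the output distribution and raises its entropy, which makes verbatim repetition less likely. A minimal sketch (the logits are made up; function names are illustrative):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Scale logits by 1/T before normalizing; T > 1 flattens the distribution.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    # Shannon entropy in nats; higher means more diverse sampling.
    return float(-(p * np.log(p + 1e-12)).sum())

logits = np.array([4.0, 2.0, 1.0, 0.5])
low = entropy(softmax(logits, temperature=0.7))
high = entropy(softmax(logits, temperature=2.0))
# For non-uniform logits, raising the temperature strictly increases entropy.
```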


p1esk | 3 years ago

I'm not sure what you mean by "backpropping through that model so as to decrease the predicted popularity". During training, we train a model to literally reproduce famous chunks of music exactly as they are in the training set. We can also learn to predict popularity at the same time, but we can't backpropagate anything that will reduce popularity, because this would directly contradict the main loss objective of exact reproduction.

Having said that, I think the idea of predicting popularity is a good one - we can use it to filter already-generated chunks during the post-training evaluation phase.
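That filtering step is straightforward to sketch: score each generated chunk with the popularity predictor and discard anything above a threshold. All names, scores, and the threshold below are made up:

```python
def filter_chunks(chunks, predict_popularity, threshold=0.8):
    # Keep only chunks whose predicted "popularity" (similarity to famous
    # training pieces) falls below the threshold.
    return [c for c in chunks if predict_popularity(c) < threshold]

# Toy stand-in for the predictor: a lookup of precomputed scores.
scores = {"motif_a": 0.2, "famous_theme": 0.95, "motif_b": 0.4}
chunks = ["motif_a", "famous_theme", "motif_b"]
kept = filter_chunks(chunks, scores.get)
# "famous_theme" is dropped; the two original motifs survive.
```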

I don't think the other two methods you suggest would help here: we want to generate while conditioning on famous pieces, and we don't want to increase the temperature if we want to generate conservative but still high-quality pieces.

It's true that we (humans) are less sensitive to plagiarism in text output, but even for LLMs it is a problem when they try to generate something highly creative, such as poetry. I have personally noticed, multiple times, particularly beautiful phrases of poetry generated by GPT-2, only to google them and find out they were copied verbatim from a human poem.

bloep | 3 years ago

What I had in mind was something like a reward model that is trained on longer outputs that have a very high similarity to training examples. Something similar has been done to prevent LLMs from using toxic language. You'd simply backprop through that model, as in GANs. And no, it does not contradict the overall training objective, because the criterion would be long verbatim copies; it would not affect shorter copies of sound fragments and the like, which you would want a music model to produce in order for it to sound realistic and natural.
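The gradient mechanics of that idea can be sketched with a frozen linear "copy detector" standing in for the reward model (everything here is a toy: the detector, the dimensions, and the hyperparameters are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "copy detector": a toy linear model scoring how strongly an
# output resembles long verbatim chunks of the training set.
w_detector = rng.normal(size=8)

def copy_score(x):
    return float(w_detector @ x)

def copy_score_grad(x):
    # d(score)/dx for a linear detector is just its weight vector.
    return w_detector

x = rng.normal(size=8)   # stand-in for a generator output
lam, lr = 0.5, 0.1       # penalty weight and step size

before = copy_score(x)
# Gradient step *against* the detector's score. In practice this term is
# added to the main task gradient rather than applied on its own.
x_new = x - lr * lam * copy_score_grad(x)
after = copy_score(x_new)  # strictly lower than `before`
```

The point of the sketch: because only long verbatim copies score highly, descending this term steers the generator away from wholesale copying without touching short, realistic fragments.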

sdenton4 | 3 years ago

Meanwhile, the music industry is full of copyright cases brought over matching combinatorial fragments... Humans have the same problem in this case.