Having recently spent some time browsing the submissions of /r/deepdream and seeing how far computer-generated imaging has come, I have no doubt that one day we will be listening to wonderful sounds generated by artificial intelligence. However, an intrinsic part of music as an art form (to me, anyway) is its humanity. A computer can generate lyrics that are relatable on the surface, but knowing that there isn't a person behind those words, feeling what I'm feeling, definitely detracts from the experience of the whole.
That being said, this is incredibly exciting to me, and I look forward to seeing how it progresses and probably challenges my ideas of what music is.
> but knowing that there isn't a person behind those words, feeling what I'm feeling, definitely detracts from the experience of the whole.
I recently had a discussion about this with a musician. I said that I didn't like it when music was produced (and certainly when the lyrics were written) by somebody other than the performer. I said it took away from the experience of 'getting to know' the person I was listening to.
She basically replied that I was being extremely old-fashioned, and that this 'idea' of music was very harmful for the business. She said it prevented people from working together, each contributing what they did best.
If she's right, I guess we just have to interpret the music on its own, and not see it as the mental state of some individual creator. Maybe this is related to how authors are annoyed when people identify them with their main characters. In any case, non-individual art doesn't seem to be going away.
> but knowing that there isn't a person behind those words, feeling what I'm feeling, definitely detracts from the experience of the whole.
It would be very interesting to create a secret fake persona, publish AI-generated music under that name, and see whether people identify with the lyrics and music. See how far it goes; who knows, maybe it creates the next superstar. Then later reveal the secret.
I've not been following the AI-music-making scene in depth, so maybe this has already been explored: instead of (or in addition to) focusing on creating AI that can compose entire works, is anyone attempting to use AI to assist composers/musicians in getting over 'rough spots' in their own music?
For example, I often write pieces of music where I can pinpoint very specific sections where the harmonic, melodic or rhythmic choices (separately or in any combination) sound trite or boring/predictable. I'd love to be able to feed the entire song into a machine, point out those specific trouble spots to it, and have it generate alternatives for just those sections, perhaps on a spectrum from 'not-too-far-from-the-rest-of-what-you've-written' (harmonically/melodically/rhythmically) on one end to 'way-out-there' on the other.
Even if none of the output was usable on its own, it would still have value in stimulating my imagination with ideas that otherwise wouldn't have occurred to me and that I could build on or refine.
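The workflow described above can be sketched as a toy interface. Everything here is hypothetical: `regenerate_section` and its diatonic-nudge heuristic are stand-ins for whatever a trained model would actually do, but they show the shape of the idea (a melody, a marked trouble spot, and a novelty dial):

```python
import random

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the home scale

def regenerate_section(melody, start, end, novelty, seed=0):
    """Return a copy of `melody` with notes in [start, end) replaced.

    `novelty` in [0, 1] controls how far the rewrite may wander: near 0
    it nudges notes by a scale step, near 1 it may leap up to an octave.
    """
    rng = random.Random(seed)
    out = list(melody)
    max_leap = 1 + int(novelty * 6)  # 1 scale step .. 7 (a full octave)
    for i in range(start, end):
        # Snap the original note to its nearest scale degree, then move
        # it a random number of scale steps, carrying octaves as needed.
        degree = min(range(7), key=lambda d: abs(C_MAJOR[d] - out[i] % 12))
        octave, new_degree = divmod(degree + rng.randint(-max_leap, max_leap), 7)
        out[i] = (out[i] // 12 + octave) * 12 + C_MAJOR[new_degree]
    return out

melody = [60, 62, 64, 65, 67, 65, 64, 62]  # C D E F G F E D
conservative = regenerate_section(melody, 2, 5, novelty=0.1)
adventurous = regenerate_section(melody, 2, 5, novelty=1.0)
```

Sweeping `novelty` from 0 to 1 gives exactly the 'not-too-far' to 'way-out-there' spectrum, while everything outside the marked section stays untouched.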
> However, an intrinsic part of music as an art form (to me, anyway) is its humanity.
I agree with the sentiment. But from a perceptual perspective, it's getting harder and harder to distinguish what is synthetic from what is organic. The trumpets, strings, and keys you hear on a song you like on the radio? It's quite probable they weren't played by humans: they may have been programmed, with the sounds coming from a massive, hyper-sampled sound bank.
If we ran an experiment, recruiting even expert musicians and asking them to classify sound clips as 'played' or 'programmed', I bet they wouldn't get them right.
I encourage you to listen to more music, particularly if "humanity" is intrinsic to the art of it. There is plenty of music made by humans that is rather un-humanlike. Noise, Drone, Chiptune, Glitch, Musique Concrète, Field Recordings, etc. all explore the outer limits of music as an art form, and many have no more of a human element than the fact that somebody was there to create them (incidentally, most music in those genres doesn't have lyrics).
What this will change probably isn't pop music; it's soundtracks for TV commercials, video games, shows, movies, etc.
Currently, to put music in something you need to license it somehow. In most cases the generic music you hear in media is sourced from a company with a library of royalty-free songs that it sells for a pretty hefty license fee.
Imagine instead being able to go into some software, punch in some keywords to describe the scene and its duration, track what happens in the scene on a timeline of some sort, and let the computer render original music for the score. No middlemen, no musicians, no royalties or licenses other than for the software. It would change the industry overnight, and make being a professional musician an even less promising career choice than it is today...
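A sketch of what that 'keywords on a timeline' interface might look like. All of the names here are invented for illustration; the stub just maps keywords to coarse musical parameters where a real product would call a generative model:

```python
# Hypothetical keyword -> musical-parameter table; a real system would
# feed these into a generative model rather than a lookup.
MOODS = {
    "tense":     {"tempo": 140, "mode": "minor"},
    "uplifting": {"tempo": 120, "mode": "major"},
    "somber":    {"tempo": 70,  "mode": "minor"},
}

def render_score(timeline):
    """timeline: list of (start_sec, duration_sec, keyword) tuples."""
    cues = []
    for start, duration, keyword in timeline:
        params = MOODS.get(keyword, {"tempo": 100, "mode": "major"})
        cues.append({"start": start, "duration": duration, **params})
    return cues

score = render_score([
    (0, 15, "tense"),       # opening chase
    (15, 30, "somber"),     # aftermath
    (45, 20, "uplifting"),  # resolution
])
```

Parameters in, cues out: no library, no licensing, no middlemen in the loop.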
That's a really good point: there's a market opportunity for a soundtrack generator that does a decent job of producing usable generic music for video productions.
People want and need other people around, doing human things. Even if the musical output was impressive, I can't think of people developing a deep relationship with it outside of novelty. Algorithms are cold, and music is usually very much the opposite. I can see an "uncanny valley" effect cropping up with the imitative forms: something that is supposed to come across as laden with emotion instead provokes revulsion. What do we get out of computers composing songs that are supposed to relate to the fragility of the human condition?
Sure, the algorithms, once sufficiently advanced, could probably trick us into thinking that certain examples of generative music were made by a person, and the later reveal of their algorithmic origin would prove that "the humans are stupid" and "the Google algorithms are clever", but what are we actually proving here?
Can a computer devise new artistic forms that have some genuine impact on people? Can a computer come up with Bacon's Triptych of George Dyer without regurgitating fragments of what it has already seen? What do we get out of a computer aping the alcohol-fuelled, sweaty, anarchic performances of The Black Lips?
The interesting thing will be seeing whether this goes places music has not yet gone: some new composition method, manipulation of frequency in ways that humans have not yet devised.
I see this sort of application having a lot of use in the kinds of derivative pop music developed by ensembles of songwriters and manufactured purely to generate radio hits. Bacon's Triptych of George Dyer is genius. The average person listening to Taylor Swift just does not care about Bacon's Triptych of George Dyer.
In a way, Magenta's job is not besting Bach. By the definition of Bach (a human being who changes the way we view and enjoy music), a non-human being cannot best Bach. Magenta's job is besting a much simpler, if equally challenging role - Max Martin, or the writers of "Let it Go".
As it turns out, this kind of music is already pretty formulaic. Much has been written on repetitive chord progressions being spammed across hundreds of famous singles. In a way, artists shouldn't fear the potential of these technologies besting them - they should thank them.
Artists are now freed from loading their albums with eye-rollingly generic lead singles that they immediately get sick of ("Stairway to Heaven", "Creep", "Smells Like Teen Spirit") because record labels know that's what will get the most radio play. You can just let the machine do those. Now an artist's reputation is determined purely by their relative mettle against other human artists.
> Even if the musical output was impressive, I can't think of people developing a deep relationship with it outside of novelty.
There are plenty of times and places where people want high quality "music" but don't want to actually engage with it on any level - the music that tells you you're still connected when you're waiting for a conference call, the low volume background music in some retail environments, the music in a lift. If "pleasant musical noise" could be generated automatically and to a sufficient quality I think there'd be a pretty decent market for it.
Imagine being able to add to a song with commands like 'add a psytrance bass line', even within predefined parameters, dynamically generating an entirely new bass line from other songs in the genre.
Maybe you could instantly add an improvised violin melody by telling it a style, given that the human band's chords/key are consistent.
Sentiment analysers could tweak the music based on crowd reaction towards musician-defined goals, and learn those presets over time.
If music is a synchronisation layer between humans, maybe machine learning could help us to synchronize even more closely.
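The crowd-feedback idea above can be made concrete with a toy control loop. The sentiment numbers below are invented; a real system would get them from an analyser watching the audience:

```python
def tweak(value, sentiment, goal, rate=0.5):
    """Move a musical parameter toward a musician-defined goal.

    `sentiment` in [0, 1]: 1.0 means the crowd loves it (change nothing),
    0.0 means it's falling flat (move aggressively toward the goal).
    """
    return value + rate * (1.0 - sentiment) * (goal - value)

energy = 0.2                       # current intensity of the set
for sentiment in [0.3, 0.4, 0.6]:  # three successive crowd readings
    energy = tweak(energy, sentiment, goal=0.9)
```

Learning the presets over time would then just mean storing which goals worked for which crowds.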
The problem is the reliance on MIDI. It shouldn't be hard to build a credible house-music generator even with MIDI, but relying on MIDI data for training won't get you something that sounds like a Jimi Hendrix. Automated dance music, though, makes sense.
Keyboards have been popping out automatic C-maj, A-min, F-maj, G-maj progressions for forty years now. And with the typical toolset for electronic music, it will be easy to create something similar.
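That four-chord accompaniment (I-vi-IV-V in C) really is just arithmetic on MIDI note numbers, which is roughly what those keyboards do. The note table and `triad` helper below are mine, purely for illustration:

```python
NOTE = {"C": 60, "D": 62, "E": 64, "F": 65, "G": 67, "A": 69, "B": 71}

def triad(root, quality):
    """Stack a third and a fifth on a MIDI root note."""
    third = 3 if quality == "min" else 4   # minor vs. major third
    return [root, root + third, root + 7]  # root, third, perfect fifth

progression = [triad(NOTE["C"], "maj"),  # C-maj
               triad(NOTE["A"], "min"),  # A-min
               triad(NOTE["F"], "maj"),  # F-maj
               triad(NOTE["G"], "maj")]  # G-maj
```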
But the masterpieces of music demand something altogether different. Beethoven breaking with consistency by switching keys in the adagio of the 5th Piano Concerto; Wagner introducing the Tristan chord; Berg using C major in Lulu only as a joke, when the word 'money' is mentioned.
Add to that ingenuity the personal drama behind music: from Bach's crisis of faith leading to 'What God does is well done', to Mahler losing so many of his children that he composed the Kindertotenlieder ("Songs on the Death of Children"), to the origins of a simple pop song such as 'Tears in Heaven' that moves people tremendously... Music is shaped by our biological life cycle, not by that of a computer program.
I think algorithms are perfectly capable of generating music we'll consider creative and beautiful. The trick is, you can't tell people it was done by a computer!
Look at the examples you've provided. They gain additional meaning because of the context you've given. When dealing with art, people like to wonder what the author thought and how they felt. People try to connect. With machines, they know there is no human to connect with on the other side, so the work will be considered inferior.
Not to take away from your point, but none of the above stop an AI from replicating that behavior. It's just a matter of the state-of-the-art catching up.
That's a very narrow definition. Music is sound we interpret. There is no reason why computers cannot "compose" music that is meaningful to us. Of course it will probably be by happenstance, but nonetheless. Also, computers are good at finding patterns. There is definitely a chance they (I should write "it", but can't reformulate it so it sounds good) can find patterns interesting to us.
If AI-generated music works out and becomes popular, will people actually optimize for quality of music, or will an engineering team instead design the next pop star, optimizing for profitability and marketability?
The more interesting application of this IMO is not replacing current popular/practicing musicians and artists but enabling non-professionals who have some musical inclination but limited time/skill/etc. Imagine a synthesizer for dance music with knobs for high level parameters like "grimy-ness" or "shine." Rather than auto generate a complete song with no user input, semantic control could be offered using learned models, allowing a rewarding experience with musical creativity without having to spend a lot of time learning the technical intricacies of these semantics.
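A minimal sketch of how such semantic knobs could work, assuming (as some generative models allow) that each attribute corresponds to a learned direction in a latent space. The vectors here are made up; in a real system they would come from training:

```python
def apply_knobs(latent, directions, knobs):
    """Shift a latent vector along learned attribute directions,
    scaled by each knob's setting, before decoding to audio."""
    out = list(latent)
    for name, amount in knobs.items():
        out = [z + amount * d for z, d in zip(out, directions[name])]
    return out

base = [0.0, 0.0, 0.0, 0.0]  # latent code of some starting loop
directions = {               # hypothetical learned attribute directions
    "griminess": [1.0, -0.5, 0.0, 0.2],
    "shine":     [0.0, 0.3, 1.0, -0.1],
}
tweaked = apply_knobs(base, directions, {"griminess": 0.8, "shine": 0.5})
```

The user turns 'griminess' to 0.8 without ever touching an envelope or a filter; the model's decoder does the technical work.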
Many of the tools you describe are already out in the field. Ableton Live and Max for Live really open up whole new sectors of tools for those who might be interested. There is a learning curve for music-making software, though, and it can be rather steep. That said, Instant Haus [1] does sort of what you describe, though it is targeted more at practicing musicians. I'm a fan of GarageBand's "Drummer" as well.
AI celebrities with distinctive personalities will be fun. I can't wait to see a real-life (as it were) Hatsune Miku making her own content.
EDIT: And then the inevitable schism among fans about what their base personalities should have been like, and the resulting clones, throwing the AIs into an existential crisis when confronted with their alternative versions...
There's music and then there's music. I'm glad they are at least thinking about long narrative structures.
Ultimately I don't think this is very worthwhile, because I personally believe the entire definition of art and music is a production filtered through the human experience. The same piece of music would mean more to me coming from a human than from an AI or program. If someone told me it was from a human and then later told me it actually came from an AI, it wouldn't really accomplish anything deep; it'd just make me feel tricked.
Well, look at it this way: the AI is really not so different from us in intelligence (although presently dumber), but it's different in motivation. It has only one "desire", namely to understand you and write music that connects with you. It's not trying to fool you; it's trying to understand the human experience itself and tell back to you what it sees.
Isn't it OK to let ourselves be moved by its love letters to humanity?
Every Monday, there are hundreds of tweets from Spotify users who declare that the Discover Weekly algorithm understands them better than anyone, that they want to marry it etc. (This is a remarkable testament to neural network AIs. Not many people want to marry Netflix or Amazon's recommender system, to put it like that!)
Of course that only finds music for you, it doesn't create it from scratch. But given the immense size of the library it searches through, that's impressive enough that many really get the feeling that it knows you, understands you. I hope generative algorithms will be that good one day, and I won't waste time worrying that it's fake once they are.
At least leave us humans some of the joys in life - art being one of them. Automate the bland stuff so we have more time to idle away our days mucking around with an instrument or paints or clay.
If their view of music is a sequence of notes (in MIDI) then the product won't be very interesting. In a lot of modern music genres, sound design is as important as composing the note sequence.
For the generation part, it might be a good idea to separate "music" into its parts (like bass, drums, melody, chord rhythm, ambient chord background, and other things like secondary melodies or horn-section arrangements). A unique model for each of those could learn differently. Each of those parts could be broken down even further, too.
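That decomposition might look like the sketch below: one stand-in "model" per part (all four generators are invented placeholders for separately trained models), queried bar by bar and layered into one arrangement:

```python
def bass_model(bar):    return [36, 36, 43, 36]               # root-note pulse
def drum_model(bar):    return ["kick", "hat", "snare", "hat"]
def melody_model(bar):  return [60 + (bar * 2) % 12, 64, 67, 64]
def pad_model(bar):     return [[48, 52, 55]]                 # sustained chord

PART_MODELS = {"bass": bass_model, "drums": drum_model,
               "melody": melody_model, "pads": pad_model}

def arrange(num_bars):
    """Ask every part model for its material, one bar at a time."""
    return [{name: model(bar) for name, model in PART_MODELS.items()}
            for bar in range(num_bars)]

arrangement = arrange(4)
```

Each sub-model could then be trained, swapped out, or broken down further independently of the others.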
If Google does it, they could also sneak some of these into Play Music playlists, see the response in number of replays and such, and gauge the overall quality and likability. Basically, crowdsource the reviews.
Ultimately, I think it is possible for a machine to generate music, but we're going to be talking about something that is more or less an AI at that point; after all, music has a soul.
I don't know about that. The whole idea of "having a soul" is so nebulous that it's hard to build a view on top of it. Do we have souls? If yes, is it possible to mechanically replicate a soul? If not, why not? What's a soul?
If instead you define art as the communication of things that can't be fully expressed in direct, sterile language, that's something I can get behind. Going that route, I think that getting AI to create real art has some pretty clear challenges, but there are also ways where I could see it being better than humans.
Depending on how the AI learns and analyzes it has a chance to have a unique perspective on human communication. From there, it can find new and innovative ways of identifying gaps between primal human experiences and sterile human communication - and with a unique understanding of human communication could come up with fascinating ways of bridging those gaps.
This is all spitballing, but I think AI could eventually be the main new frontier in art.
What about it sounds like a moonshot? It's using an existing Google open source project and is looking to use it for an artistic purpose along with building a community. Honestly it sounds like a 20% project from a couple of people on the TensorFlow project. Almost 0 cost.
I'll always like that terrible Blurred Lines song because I had a really fun time dancing to it at a friend's birthday party. I'm pretty sure Macbook is a better person than Robin Thicke.
Meaning in music can come from the creator or the listener.
transpy | 9 years ago
David Cope's experiments showed that: https://www.theguardian.com/technology/2010/jul/11/david-cop...
zamalek | 9 years ago
This has been done with LSTM[1][2]. Impressively, the NN is generating waveforms and not MIDI notes - at around 3:50 it even attempts some singing.
[1]: https://www.youtube.com/watch?v=0VTI1BBLydE [2]: https://github.com/MattVitelli/GRUV
aclissold | 9 years ago
“I placed myself in the situation that a child of mine had died. When I really lost my daughter, I could not have written these songs any more.”
[0]: https://en.wikipedia.org/wiki/Kindertotenlieder
hellofunk | 9 years ago
I think he lost one child, not many. And it was a few years after he finished the Kindertotenlieder.
6stringmerc | 9 years ago
[1] https://www.youtube.com/watch?v=KoIcewM8sKY
samfisher83 | 9 years ago
https://www.youtube.com/watch?v=QpB_40hYjXU
6stringmerc | 9 years ago
https://cdn2.vox-cdn.com/uploads/chorus_asset/file/6577761/G...
I had a listen yesterday. It's okay. Nothing revolutionary. Competent though.
kuschku | 9 years ago
T-Mobile holds trademarks for T-anything, Magenta, the color magenta, etc., covering not just communication and networking but also music and more.
zer00eyz | 9 years ago
Can code generate a catchy pop tune? I'm sure it can. Would I qualify it as music? Probably not.
Music can pull from sources that are EXTERNAL to the music. Here's a modern track that does just such a thing: https://www.youtube.com/watch?v=xvtNS6hbVy4
Music can have some very non-traditional structure that most would qualify as "noise": https://www.youtube.com/watch?v=zYaWND9n9h0
Hell, music can even be "silent": http://rosewhitemusic.com/piano/writings/silence-taught-john...