When using evolutionary pressures to guide software, you have to remember that your stated goal (via fitness functions) often does not truly reflect your intended goal.
That is one kind of problem: the 'Genie' problem. Be very specific about what you wish for, because the genie will misinterpret you.
My first thought: why not just use max gain? Of course that would lead to catastrophic failure over the short or long term. Second thought: why not both? Just as speed is a derivative, perhaps you want a set of differential quantities: gain per time squared, too.
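The "why not both" idea can be sketched as a weighted combination of objectives. Everything below (the weights, the function shape) is an illustrative assumption, not from any real trading system:

```python
# Illustrative multi-objective fitness: total gain plus a rate term
# (a simple time derivative). W_GAIN and W_RATE are assumed weights.
W_GAIN, W_RATE = 1.0, 0.5

def fitness(total_gain, elapsed_days):
    # Guard the division so "never traded" doesn't blow up.
    rate = total_gain / elapsed_days if elapsed_days else 0.0
    return W_GAIN * total_gain + W_RATE * rate

print(fitness(100.0, 20))  # 100 + 0.5 * (100 / 20) = 102.5
```

In practice one would tune the weights (or use a proper multi-objective method) rather than picking them by hand like this.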
The article's philosophical discussion of "surprise" feels a little clunky and academic to me, and I think it conflates some different kinds of surprise, but it's still really fun to think about.
Nice work! Your inspired-by-Sims collection is the best I've seen in that genre, I think. I always look when someone links to their genetic images because I once made some too: https://djb.deviantart.com/gallery/ (They were less garish at the original gamma setting; that's how long ago it was.)
Off topic - I find it humorous that arXiv lists 49 authors and then ends the author list with:
"et al. (1 additional author not shown)"
Finding where your simulation model breaks down is one of two great features of evolutionary optimization. The other is its ability to preserve a population of approximate solutions rather than finding only a single optimized solution.
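The population-of-approximate-solutions point can be illustrated with a toy evolutionary loop. All names and parameters here are made up for the sketch:

```python
# Toy evolutionary loop that returns the whole elite population,
# not just the single best individual. Parameters are illustrative.
import random

def evolve(pop, fitness, generations=50, keep=10):
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[:keep]                        # elitist selection
        # Each parent produces three mutated offspring (Gaussian noise).
        pop = parents + [p + random.gauss(0, 0.1)
                         for p in parents for _ in range(3)]
    # Return a *set* of good approximate solutions.
    return sorted(pop, key=fitness, reverse=True)[:keep]

# Example: maximize -(x - 2)^2 over scalars; the result is a cluster
# of near-optimal solutions around x = 2, not one point.
best = evolve([random.uniform(-10, 10) for _ in range(40)],
              lambda x: -(x - 2) ** 2)
```

Real systems use niching or diversity pressure to keep that final population spread across *different* optima rather than clustered on one.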
The article claims that certain popular dance songs from the early 2000s were in part generated using evolutionary algorithms. Does anyone know which songs they might be?
jeffclune | 8 years ago
Here are two fun gifs of clever solutions AI came up with:
https://twitter.com/jeffclune/status/974718199722795008
https://twitter.com/jeffclune/status/973605950266331138
Here are some press articles which provide a shorter summary of some fun anecdotes.
popularmechanics.com/technology/robots/a19445627/the-hilarious-and-terrifying-ways-algorithms-have-outsmarted-their-creators/
https://www.newscientist.com/article/8-8-hilarious-ways-ai-h... (paywalled unfortunately)
mring33621 | 8 years ago
For example, when I was using genetic algorithms to pick stock trades, I tried to maximize total_net_gain_loss / number_of_trades. The GA quickly figured out that 0 trades was the best answer. In hindsight, Duh! But I wanted to make trades!
"The only winning move is not to play"
sago | 8 years ago
But it's not the only kind of problem. There are inherent biases in evolutionary systems. One example: in systems with varying length genotypes there is a massive pressure to bloat. Even if you encode 'small' as a strong requirement in your fitness function, it may not be enough (or it may be enough to completely defeat whatever your real goal was).
There are inherent biases. In the (simulated) genetics, in the genotype to phenotype mapping, in the evolutionary operators, even if you get the fitness function right.
Evolutionary computing as an engineering tool is hard.
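One common countermeasure for bloat, along the lines of encoding 'small' in the fitness function, is a parsimony penalty. A minimal sketch, where the weight ALPHA and the list-of-genes representation are assumptions for illustration:

```python
# Hypothetical parsimony pressure: subtract a length penalty from raw fitness.
ALPHA = 0.01   # assumed penalty weight per gene

def fitness_with_parsimony(raw_score, genome):
    return raw_score - ALPHA * len(genome)

short = fitness_with_parsimony(0.90, [0] * 10)     # ~0.80
bloated = fitness_with_parsimony(0.92, [0] * 500)  # ~-4.08
# The slightly better but much longer genome now loses badly, which is
# exactly the "defeat whatever your real goal was" risk: set ALPHA too
# high and the search optimizes for smallness instead of the task.
```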
allthenews | 8 years ago
posterboy | 8 years ago
On another note, "not to play" should be a zero-division error.
dahart | 8 years ago
Many of the projects here are inspired by Karl Sims' work in the early 90s.
http://www.karlsims.com/
Sims evolved virtual creatures by specifying goals to achieve and then running a physics simulation. He noted at the time that the evolution process was great at exploiting bugs in the simulation.
https://www.youtube.com/watch?v=JBgG_VSP7f8
I was inspired enough by Sims' genetic images (http://www.karlsims.com/genetic-images.html) that I spent a few years trying to get surprising and beautiful results of my own, with some limited success (https://flic.kr/s/3Xoz).
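The genetic-images idea boils down to evolving a per-pixel expression tree. A minimal sketch of the representation, with an operator set chosen here for illustration (Sims' actual set was much richer):

```python
# Minimal sketch of Sims-style "genetic images": an image is an
# expression tree over (x, y), evaluated at every pixel.
import math, random

OPS = [
    (1, lambda a: math.sin(math.pi * a)),   # unary
    (2, lambda a, b: a * b),                # binary
    (2, lambda a, b: (a + b) / 2),          # binary
]

def random_expr(depth=4):
    # Leaves are the pixel coordinates; interior nodes are operators.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 'y'])
    arity, fn = random.choice(OPS)
    return (fn, [random_expr(depth - 1) for _ in range(arity)])

def evaluate(expr, x, y):
    if expr == 'x':
        return x
    if expr == 'y':
        return y
    fn, args = expr
    return fn(*(evaluate(a, x, y) for a in args))

expr = random_expr()
pixel = evaluate(expr, 0.5, -0.25)   # grayscale value at one (x, y)
```

Evolution then proceeds by mutating and crossing these trees, with a human (or a fitness function) picking which images survive.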
abecedarius | 8 years ago
It always felt like there's more potential in this direction. Maybe with a magic sprinkling of neural nets?
VyseofArcadia | 8 years ago
To me it sounded like something a referee would request or something they suspected a referee would request.
rripken | 8 years ago
I know a line has got to be drawn somewhere, but in this case listing the last author (Jason Yosinski) would have taken less space than the explanation that not all the authors are shown.
sevensor | 8 years ago
323454 | 8 years ago
jcims | 8 years ago
I don't know why, but after ten years it remains one of my favorite stories on the Internet. Worth a read IMHO.