> The search for artificial intelligence modelled on human brains has been a dismal failure.
No, it hasn't. There have been huge strides in artificial neural networks in the last decade. One example is the HyperNEAT algorithm [1], which uses an indirect encoding that lets it evolve networks with millions of connections. There's an entire conference, Neural Information Processing Systems (NIPS), which is considered one of the most prestigious publication venues in AI.
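The "indirect encoding" point is the interesting part: HyperNEAT evolves a small CPPN (compositional pattern-producing network) and queries it at pairs of neuron coordinates to generate the weights of a much larger substrate network. Here is a rough sketch of that idea only, using a fixed stand-in function where the real algorithm would use an evolved CPPN; the function, grid size, and threshold are all invented for illustration, not taken from the paper:

```python
import math

def cppn(x1, y1, x2, y2):
    """A stand-in CPPN: a small fixed function of the positions of two
    substrate neurons. In real HyperNEAT this network is itself evolved
    by NEAT; here it is hard-coded just to show the indirect encoding."""
    return math.sin(3 * x1) * math.cos(3 * y2) + math.tanh(x2 - y1)

def substrate_weights(n, threshold=0.3):
    """Query the CPPN at every pair of substrate coordinates to produce
    the connectivity of a much larger network: an n-by-n grid of neurons
    yields n**4 candidate connections from a handful of CPPN parameters."""
    coords = [(i / (n - 1), j / (n - 1)) for i in range(n) for j in range(n)]
    weights = {}
    for a, (x1, y1) in enumerate(coords):
        for b, (x2, y2) in enumerate(coords):
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:  # only express sufficiently strong links
                weights[(a, b)] = w
    return weights

# A 32x32 substrate: 1024 neurons, over a million candidate connections,
# all generated from one tiny reusable function.
w = substrate_weights(32)
```

The point is the compression: the genome stays small while the expressed network can be arbitrarily large, and geometric regularities in the CPPN (symmetry, repetition) show up as regular wiring patterns in the substrate.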
This article is complete garbage. Ant colony optimization has been around for decades. It's great for routing and similar tasks where you need to find the best path and be able to handle breakdowns in that path. However, there is no basis for making the leap that human brains function like ant colonies.
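For readers who haven't seen it, the routing use case described above is easy to sketch. This is a toy version of the ACO idea with made-up parameters, not Dorigo's actual Ant System: ants walk the graph probabilistically, shorter successful paths get more pheromone, and because exploration never stops, the colony re-routes when an edge breaks:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=50, n_iters=100,
                      evaporation=0.5, seed=0):
    """Toy ant colony optimisation for shortest paths.

    graph: dict mapping node -> {neighbor: edge_length}.
    Returns (best_path, best_length) from start to goal.
    """
    rng = random.Random(seed)
    # Pheromone level on each directed edge, initially uniform.
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")

    for _ in range(n_iters):
        successful = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break  # dead end; this ant gives up
                # Prefer edges with more pheromone and shorter length.
                weights = [pheromone[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if node == goal:
                length = sum(graph[path[i]][path[i + 1]]
                             for i in range(len(path) - 1))
                successful.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # Evaporate, then let successful ants deposit pheromone
        # in proportion to path quality.
        for edge in pheromone:
            pheromone[edge] *= (1 - evaporation)
        for path, length in successful:
            for i in range(len(path) - 1):
                pheromone[(path[i], path[i + 1])] += 1.0 / length
    return best_path, best_len

# Two routes from A to D: via B (length 2) or via C (length 6).
graph = {"A": {"B": 1, "C": 5}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
path, length = aco_shortest_path(graph, "A", "D")  # finds A-B-D, length 2
```

If the A-B edge is deleted mid-run (a "breakdown"), residual exploration pressure means ants fall back to A-C-D, which is exactly the resilience property that made ACO attractive for network routing.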
To disagree on a side issue: it's been a long time since NIPS dealt much with classic ANNs (which never had much to do with human brains, in any case). Most of the action there, as in AI and ML at large these days (e.g., it's also the focus at ICML and other venues), is in statistical methods.
(On the other hand, neuroscience and explicitly biological neural modeling are exciting areas, reasonably well-represented at NIPS. Those topics, however, are almost entirely different from neural networks of the multilayer perceptron / [Hyper]NEAT varieties.)
But your criticism of the article seems accurate. ACO isn't new, and there's little evidence that it will solve any of the major outstanding problems in AI.
Less generously, however, I'd suggest that much of the research on the family of population-based stochastic search methods (ACO, PSO, and HyperNEAT included) is prone to the same risk. The field lacks both a shared theoretical foundation and a standard methodology and benchmark set for empirical comparison (in contrast to, say, supervised learning), which makes it temptingly easy for a researcher to believe too strongly in the capabilities of his or her pet algorithm. This situation seems to have balkanized the field (page through a recent GECCO proceedings, for example) and holds back wider progress.
That's not to say that HyperNEAT can't do great things. It's a fun approach, and Stanley et al are running far with it. But your boosterism of it, and the boosterism of ACO that you're objecting to, seem closely related.
(For contrast, I'd suggest, eg, the natural gradient work at IDSIA. It's unlikely to be the ultimate method, but may be a good model for solid research in this area.)
I agree -- artificial intelligence modeled on human brains hasn't been explored nearly enough to label it a failure. In addition to advances in artificial neural networks, there has been a lot of progress on more realistic models of the brain (specifically the neocortex). Artificial neural networks started the interest in "brain-like" modeling, but for a while most neural networks were about as "brain-like" as a 100-component circuit is "computer-like".$ But unlike classical ANNs, new research such as HyperNEAT is exciting because it might reveal the mechanisms by which the wiring of the brain evolved into a structure capable of carrying out robust learning mechanisms.
Equally exciting is the research going on in figuring out the specifics of the learning mechanisms themselves. Research groups like the Redwood Center for Theoretical Neuroscience in Berkeley and the Center for Biological and Computational Learning at MIT (and many others) are making progress in understanding neocortical function and companies such as Numenta$$ are developing cortical learning algorithms based on specific mechanisms observed in the neocortex.
$ This example is from Jeff Hawkins's book "On Intelligence".
Thank you, thank you, thank you for posting a link to all of these papers. There are so *many* cool papers. I'm now fully absorbed in learning about genetic algorithms that search for novelty instead of the direct objective. Fascinating!
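The novelty-search idea mentioned here is easy to prototype: select parents by how behaviorally different they are from everything seen so far, rather than by progress toward any objective. Below is a toy sketch over scalar genomes, with the behavior characterization and all parameters invented for illustration (the Lehman and Stanley work uses richer behavior descriptors, e.g. a robot's final position in a maze):

```python
import random

def novelty(behavior, others, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    dists = sorted(abs(behavior - b) for b in others)
    return sum(dists[:k]) / k

def novelty_search(behavior_fn, pop_size=30, generations=50, seed=0):
    """Toy novelty search over scalar genomes: selection rewards
    behavioral novelty, not fitness on any fixed objective."""
    rng = random.Random(seed)
    pop = [rng.uniform(-1, 1) for _ in range(pop_size)]
    archive = []  # permanent record of novel behaviors
    for _ in range(generations):
        behaviors = [behavior_fn(g) for g in pop]
        scores = [novelty(b, behaviors[:i] + behaviors[i + 1:] + archive)
                  for i, b in enumerate(behaviors)]
        # Archive the most novel individual of this generation.
        archive.append(behaviors[scores.index(max(scores))])
        # Keep the most novel half as parents and mutate them.
        ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
        parents = ranked[:pop_size // 2]
        pop = [p + rng.gauss(0, 0.1) for p in parents for _ in range(2)]
    return archive

archive = novelty_search(lambda g: g)  # identity behavior, for illustration
```

With no objective at all, the population is continually pushed away from behaviors already in the archive, which is what lets novelty search escape the deceptive local optima that objective-driven GAs get stuck in.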
>In particular, Dr Dorigo was interested to learn that ants are good at choosing the shortest possible route between a food source and their nest. This is reminiscent of a classic computational conundrum, the travelling-salesman problem.
I'm not sure if you truncated that excerpt on purpose or not, but the sentence that follows it indicates that the author does actually know what the TSP is.
> In particular, Dr Dorigo was interested to learn that ants are good at choosing the shortest possible route between a food source and their nest. This is reminiscent of a classic computational conundrum, the travelling-salesman problem. Given a list of cities and their distances apart, the salesman must find the shortest route needed to visit each city once.
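The problem statement in that excerpt translates directly into a brute-force solver. This sketch uses the classic closed-tour formulation (return to the starting city) and is feasible only for a handful of cities, since it checks all (n-1)! tours, which is exactly why heuristics like ACO are used on larger instances:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact travelling-salesman solver by exhaustive search.

    dist: square matrix with dist[i][j] = distance between cities i and j.
    Returns (best_tour, best_length) for a tour that starts and ends at
    city 0 and visits every other city exactly once.
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Four cities; the optimal closed tour here has length 18.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, length = tsp_brute_force(dist)
```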
Very interesting article. The Hard AI problem is one of the new frontiers of science.
If you're interested in this topic, I highly suggest you check out the Radiolab episode on emergence.
The episode doesn't focus on AI, per se, but it does talk a lot about how many individual things (ants, fireflies, bees) are not intelligent on their own, but do appear intelligent as a collective whole.
tansey | 15 years ago
[1] Stanley et al. A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks. Artificial Life journal. Cambridge, MA: MIT Press, 2009. http://eplex.cs.ucf.edu/publications/2009/stanley.alife09.ht...
another | 15 years ago
snikolov | 15 years ago
$$ Disclosure: I am working there this summer.
SMrF | 15 years ago
http://eplex.cs.ucf.edu/papers/lehman_gecco10b.pdf
retube | 15 years ago
No more an illusion than the individual intelligence/consciousness we humans experience.
endtime | 15 years ago
Ugh.
lylejohnson | 15 years ago
grg | 15 years ago
Here's the link: http://www.wnyc.org/shows/radiolab/episodes/2005/02/18
kgosser | 15 years ago
Good article though.
mirkules | 15 years ago
http://www.amazon.com/Intelligence-Morgan-Kaufmann-Evolution...
(I'm not affiliated with the book in any way :)
tswicegood | 15 years ago
masterponomo | 15 years ago
presidentender | 15 years ago