They were very much in vogue at the time. This was just after backprop was coming into its own, and before ANNs were totally surpassed by SVMs, boosting and ensembles, etc.
This was just before the second AI winter, which involved neural networks, Prolog, Lisp, fuzzy logic, Japan overtaking the US in AI, etc.
Lots of good work with neural networks was done back then:
A learning algorithm for Boltzmann machines
DH Ackley, GE Hinton, TJ Sejnowski - Cognitive Science, 1985
Learning representations by back-propagating errors
DE Rumelhart, GE Hinton, RJ Williams - Nature, 1986
Phoneme recognition using time-delay neural networks
A Waibel, T Hanazawa, G Hinton, K Shikano, KJ Lang - Readings in Speech Recognition, 1990
As all the other responses point out, NNs were red hot back then.
The interest in NNs was ignited (in part) by this two-volume collection of essays called "Parallel Distributed Processing", edited by Rumelhart and McClelland.
Dean even cites them. And, if you read the list of contributors, it contains many (though not all) of the heavy hitters.
Reading back on it, it will sound very familiar. All the amazing breakthroughs (object recognition, handwriting recognition, etc.) seemed to be there. But all that rapid progress just seemed to stop. There was this quantum leap, and then you were back to grinding away for even a 0.1% improvement.
For those who stuck it out through the second winter, things obviously paid off.
The intro essay is online: https://stanford.edu/~jlmcc/papers/PDP/Chapter1.pdf
From my perspective, neural networks were a big thing in the late 1980s, when I was on a DARPA neural network tools panel for a year and wrote the initial version of the SAIC Ansim neural network project. We had some great results using simple backprop networks. Good times.
My PhD, which I completed in 1992, was about improving back-propagation in neural networks. Neural networks were going through an initial phase of excitement caused by the Rumelhart and McClelland book. My dissertation was on modularizing NNs. https://surface.syr.edu/cgi/viewcontent.cgi?article=1130&con...
The early 90s were an interesting time for NNs and other machine learning systems. I remember getting really interested, but being told that "NNs with more than 1 layer can't really be trained", so I went into simulation rather than training. It's really great that GPUs and deep backprop came along to restore the stature of NNs.
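(For context, a minimal sketch of the kind of small backprop network being discussed: one hidden layer of sigmoid units trained on XOR by plain gradient descent. This is illustrative code, not anything from the posters above; the layer sizes, learning rate, and epoch count are arbitrary choices.)

    # Illustrative sketch only: a one-hidden-layer sigmoid network trained on
    # XOR with plain backprop and full-batch gradient descent. The sizes,
    # learning rate, and epoch count are arbitrary example choices.
    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
    b1 = np.zeros((1, 4))
    W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(20000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: squared-error loss, deltas via the chain rule.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent weight updates.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # Should approach [[0], [1], [1], [0]]; a different random init may need
    # more epochs or a different learning rate.
    print(out.round(2))

The update rule itself hasn't changed much since then; what changed later was the data, the compute (GPUs), and the tricks that made many-layer versions of this trainable.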
Not that incredible. Just about every CS / Psych / Cognitive Science Dept back then was into them. I did a project on NNs in my undergrad. Programmed in C. I’m sure thousands of others did as well.
Then when the data explosion started during the 00s, it laid the groundwork for the NN comeback.