
A Separate Kind of Intelligence

79 points | hos234 | 6 years ago | edge.org

7 comments

pipingdog | 6 years ago
> So, you start out with a system that’s very plastic but not very efficient, and that turns into a system that’s very efficient and not very plastic and flexible.

> It’s interesting that that isn’t an architecture that’s typically been used in AI.

No? That sounds exactly like training a model, then applying the trained model.
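pipingdog's reading -- a plastic training phase followed by a frozen, deployed model -- can be sketched in a few lines. This is purely illustrative (a one-weight linear model fit by gradient descent, not any particular framework):

```python
# Illustrative sketch of the train-then-apply pattern:
# plastic during training, frozen at deployment.

def train(data, lr=0.1, steps=100):
    """Plastic phase: the weight w is updated on every example."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            pred = w * x
            w -= lr * (pred - y) * x  # gradient of squared error
    return w

def apply(w, x):
    """Deployed phase: w is frozen; inference only, no learning."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(data)
print(round(apply(w, 5.0), 2))  # prints 10.0
```

The "efficient but not plastic" half of Gopnik's description maps onto `apply`: it is cheap to run precisely because nothing in it changes anymore.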

ssivark | 6 years ago
Not really. A model (after training) remains as plastic as it was before; correspondingly, it can be made to "forget" what it has "learned" and learn something else. [At least architecture-wise. Whether the learned parameter values themselves reduce plasticity is a non-trivial claim that would need to be demonstrated.]

If contemporary AI/ML models behaved the way Alison Gopnik describes, their tendency to overfit would be even more of a problem -- you couldn't even transfer them from a simulation domain to reality, since they would lose all their plasticity overfitting to the simulation!
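ssivark's point -- that nothing in a standard architecture freezes after training -- can be sketched by simply continuing gradient descent on new data; the old fit is overwritten (the usual "catastrophic forgetting" behaviour). Again a toy one-weight model, for illustration only:

```python
# Sketch of retained plasticity: the same update rule applies
# whether the weight is fresh or previously trained, so new data
# overwrites the old fit ("catastrophic forgetting").

def sgd(w, data, lr=0.1, steps=100):
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x  # gradient of squared error
    return w

w = sgd(0.0, [(1.0, 2.0), (2.0, 4.0)])    # learns y = 2x, w ~ 2
w = sgd(w,   [(1.0, -1.0), (2.0, -2.0)])  # retrained: now y = -x, w ~ -1
```

Nothing architectural stops the second call from undoing the first; any loss of plasticity would have to come from the parameter values themselves, which is the non-trivial claim flagged above.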

Also, this article contains lots of other interesting ideas to think about. Highly recommend reading all of it.

IAmGraydon | 6 years ago
That’s exactly what I was thinking as I read this.
ImaTigger | 6 years ago
Elman proposed (and I think built) a model in the mid-1990s (see his book Rethinking Innateness) that works in exactly this manner: a "wave of growth" moves across an initially highly connected "cortical" network; parts learn (parcel out) and then become fixed as other, nearby parts learn. You end up with what amounts to the end result of building a stack of deep-learned transducers, with higher-order concepts built on top of lower-order ones.
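The progressive-freezing idea ImaTigger describes can be sketched as greedy stage-wise training: fit one part, freeze it, then train the next part on top of it. This is a minimal sketch of that scheme under toy assumptions (scalar "stages" and made-up targets), not a reconstruction of Elman's actual model:

```python
# Hedged sketch of a "wave of growth": train stage 1, freeze it,
# then train stage 2 on top of the frozen stage-1 output.
# Targets and structure here are illustrative, not Elman's model.

def fit_scale(inputs, targets, transform, lr=0.01, steps=200):
    """Fit a single scalar weight on top of a (frozen) transform."""
    w = 0.0
    for _ in range(steps):
        for x, y in zip(inputs, targets):
            h = transform(x)           # output of the frozen earlier stage
            w -= lr * (w * h - y) * h  # only the new stage is plastic
    return w

xs = [1.0, 2.0, 3.0]

# Stage 1: learn a low-level feature (toy target: 3x), then freeze it.
w1 = fit_scale(xs, [3.0 * x for x in xs], transform=lambda x: x)

def stage1(x):
    return w1 * x  # frozen: w1 is no longer updated

# Stage 2: learn a higher-order mapping on top of stage 1 (toy target: 6x).
w2 = fit_scale(xs, [6.0 * x for x in xs], transform=stage1)
```

Stage 2 only needs to learn a factor of 2, because stage 1 already supplies 3x -- which is the "higher-order concepts built on top of lower-order ones" shape of the end result.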