top | item 27368319


FrereKhan | 4 years ago

It's not quite correct to say this is only for achieving deep learning. Gradient-based parameter optimisation is still a useful tool, even for small shallow networks that would be ideal for event-based signal processing.

Even for small-network tasks, training spiking networks has been non-trivial. This paper provides a way to compute exact gradients, which likely means faster optimisation than surrogate gradients or other approximation methods for SNNs.
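For readers wondering why exact gradients are hard here: the spike nonlinearity is a step function whose true derivative is zero almost everywhere, so the usual workaround is to substitute a smooth "surrogate" derivative in the backward pass. A minimal sketch of that idea (the fast-sigmoid surrogate and all names are illustrative, not taken from the paper):

```python
import numpy as np

def heaviside(v):
    # Spike nonlinearity: emit a spike (1.0) when the membrane
    # potential v crosses threshold (here, zero), else 0.0.
    return (v >= 0.0).astype(float)

def surrogate_grad(v, beta=10.0):
    # The true derivative of the Heaviside step is zero almost
    # everywhere (and undefined at threshold), so backprop through
    # it learns nothing. A common trick is to use the derivative of
    # a fast sigmoid as a smooth stand-in during the backward pass.
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

v = np.array([-0.5, 0.0, 0.5])   # membrane potentials minus threshold
spikes = heaviside(v)            # forward pass: [0., 1., 1.]
grads = surrogate_grad(v)        # backward pass: peaked at threshold
```

The surrogate is an approximation: its peak width (`beta`) is a hyperparameter, and gradients flow even for neurons that never spiked. An exact-gradient method like the one in the paper avoids that mismatch between forward and backward passes.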


periheli0n | 4 years ago

You are totally right. The algorithm itself is a potential game-changer. I guess I was carried away by the pitch in the abstract that starts off with deep learning.

Personally I think that way too many resources were wasted on trying to make better deep networks with spikes. In my opinion it is much more promising to apply spiking networks on problems that are inherently event-based.

Having a functional backpropagation algorithm such as the one provided can help with that, obviously.

datameta | 4 years ago

Based on reading just the abstract so far, the event-based application of this algorithm makes absolute sense to me. Temporal importance can be effectively characterized in memristors. At the risk of making a comparison similar to the one Andrew Ng made a decade ago, I think this approach paired with something like a ReRAM crossbar is quite an effective rough analogue to the voltage potentials across a group of neurons in the brain.

I applaud this team's efforts. A real breakthrough.