
If memristors act like neurons, put them in neural networks

83 points | hypomnemata | 5 years ago | spectrum.ieee.org | reply

39 comments

[+] theamk|5 years ago|reply
I am surprised people have high expectations from memristors. They are just another way to build an analog computer -- better for machine learning, worse for classical ODEs.

But we have not used analog computers for 50 years, and for good reason -- they are not reproducible, their accuracy is highly process-dependent and has a hard upper limit, and they are often tuned for a single function.

Would people want a chip which is basically unpredictable -- the performance can vary by tens of percent, they have to be re-trained periodically to prevent data loss, and there is no way to load a pre-trained network? I doubt it. Maybe there is an extremely narrow use case, but I do not see it in mainstream devices.

[+] waste_monk|5 years ago|reply
> Would people want a chip which is basically unpredictable -- the performance can vary by tens of percent, they have to be re-trained periodically to prevent data loss, and there is no way to load a pre-trained network? I doubt it. Maybe there is an extremely narrow use case, but I do not see it in mainstream devices.

Human brains have the same problems and seem to be fairly popular.

[+] svantana|5 years ago|reply
> Would people want a chip which is basically unpredictable

Judging by the prevalence of (pseudo-)random numbers in machine learning, I'd say yes. Reproducibility is a big plus, but its absence is not always a dealbreaker.

It's possible we might end up with a sort of left/right-brain setup, with a noisy analog hypothesis generator paired with a robust, logic-based evaluator/planner.
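That split can be sketched in a few lines. Purely illustrative, nothing from the article: a stochastic proposer (standing in for a noisy analog part) guesses candidates, and a deterministic rule-based checker accepts or rejects them. The toy problem (digits summing to a target) is my own stand-in.

```python
import random

random.seed(1)

def noisy_generator():
    # Stand-in for an analog, noise-driven hypothesis source.
    return [random.randint(1, 9) for _ in range(3)]

def logical_evaluator(candidate, target):
    # Deterministic, rule-based check: does the candidate sum to target?
    return sum(candidate) == target

def solve(target, max_tries=10000):
    # Generate-and-test loop: noisy proposals, exact verification.
    for _ in range(max_tries):
        c = noisy_generator()
        if logical_evaluator(c, target):
            return c
    return None

result = solve(15)
print(result)  # some triple of digits summing to 15
```

The point of the split is that the generator can be sloppy and non-reproducible as long as the evaluator is exact -- which is roughly the division of labor a noisy analog front end would need.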

[+] rkagerer|5 years ago|reply
I agree with all your points. Perhaps the opportunity is this: if your neural networks are tolerant of that kind of variance, there may be applications where you can shrink the hardware footprint (and cost) dramatically while improving latency (and maybe power usage).
[+] 3JPLW|5 years ago|reply
I encourage folks to actually read the linked article instead of basing their commentary on the shoddy title.

https://www.nature.com/articles/s41928-020-00523-3

Abstract:

> Resistive memory technologies could be used to create intelligent systems that learn locally at the edge. However, current approaches typically use learning algorithms that cannot be reconciled with the intrinsic non-idealities of resistive memory, particularly cycle-to-cycle variability. Here, we report a machine learning scheme that exploits memristor variability to implement Markov chain Monte Carlo sampling in a fabricated array of 16,384 devices configured as a Bayesian machine learning model. We apply the approach experimentally to carry out malignant tissue recognition and heart arrhythmia detection tasks, and, using a calibrated simulator, address the cartpole reinforcement learning task. Our approach demonstrates robustness to device degradation at ten million endurance cycles, and, based on circuit and system-level simulations, the total energy required to train the models is estimated to be on the order of microjoules, which is notably lower than in complementary metal-oxide-semiconductor (CMOS)-based approaches.
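For those unfamiliar with the scheme the abstract describes: it is Metropolis-style MCMC, where the paper uses device variability as the source of randomness. A purely illustrative Python sketch (the toy 1-D dataset, logistic model, prior, and step size are my own stand-ins, not from the paper; here an ordinary PRNG plays the memristors' role):

```python
import math
import random

random.seed(0)

# Toy 1-D dataset: binary labels that depend on a single feature.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

def log_posterior(w):
    # Logistic likelihood plus a standard-normal prior on the weight.
    lp = -0.5 * w * w
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-w * x))
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        lp += math.log(p) if y == 1 else math.log(1.0 - p)
    return lp

def metropolis(n_samples, step=0.5):
    w, lp = 0.0, log_posterior(0.0)
    samples = []
    for _ in range(n_samples):
        # In the paper's scheme this perturbation comes "for free" from
        # cycle-to-cycle device variability; here a PRNG fakes it.
        w_new = w + random.gauss(0.0, step)
        lp_new = log_posterior(w_new)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_new - lp:
            w, lp = w_new, lp_new
        samples.append(w)
    return samples

samples = metropolis(5000)
mean_w = sum(samples[1000:]) / len(samples[1000:])
print(round(mean_w, 2))  # posterior mean of the weight; positive for this data
```

The appeal of the approach is exactly that the sampler's randomness doesn't need to be clean or reproducible -- the non-ideality the parent comment objects to becomes the resource.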

[+] peatmoss|5 years ago|reply
Memristors are a technology that has long seemed poised to usher in a new era of computing. The promise of being able to redraw the current computer architectural hierarchies is tantalizing.

If machine learning applications are what finally get memristors out into the world, I wish them godspeed.

[+] theamk|5 years ago|reply
What do you think of memristors vs FPGAs?

They have many similarities -- both promise to redraw current computer architectures, both integrate memory and computing, both can have randomness built in, and both take less power than mainstream GPUs.

What do memristors provide that a specially designed "neural FPGA" cannot?

[+] YeGoblynQueenne|5 years ago|reply
>> The devices could also work well within neural networks, which are machine learning systems that use synthetic versions of synapses and neurons to mimic the process of learning in the human brain.

Yann LeCun disagrees:

IEEE Spectrum: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

Yann LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

https://spectrum.ieee.org/automaton/artificial-intelligence/...

[+] dvh|5 years ago|reply
Digikey doesn't sell memristors
[+] Cyder|5 years ago|reply
The movie 'Ex Machina' is a great illustration of this discussion... a must-see.
[+] nobodyandproud|5 years ago|reply
Basic question, but why are transistors not considered fundamental?