A General Neural Network Hardware Architecture on FPGA [pdf]

42 points | Katydid | 8 years ago | arxiv.org

9 comments

banachtarski | 8 years ago
This paper offers little content beyond a basic summary of what neural nets are, what an FPGA is, some basics about both (forward propagation, back propagation, use of HDL modules, use of LUTs), and a statement that the author created a thing.

As a machine learning guy and total closet FPGA geek, I found this a bit of a disappointment. I would have liked to see topics like floating-point precision, actual benchmarks, how an FPGA can pipeline things better than the best known CPU or GPU algorithms thanks to the absence of pipeline stalls, the I/O issues around training data and predictions, and probably a discussion of LSTM gates or GRUs (which I think FPGAs are particularly well suited to).
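To make the floating-point-precision concern concrete, here is a minimal sketch (my own illustration, not from the paper) of what a fixed-point dot product — the core MAC operation an FPGA DSP slice performs — loses relative to float arithmetic. The Q4.12 format is a hypothetical choice for the demo:

```python
# Illustrative sketch: effect of fixed-point precision on a dot
# product, the core NN operation an FPGA would implement.
# Q4.12 (hypothetical choice): 12 fractional bits, scale = 2**12.

FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Round a float to the nearest Q4.12 integer code."""
    return int(round(x * SCALE))

def fixed_dot(ws, xs):
    """Dot product done entirely in integer arithmetic, as a DSP
    slice would; the product of two Q4.12 values carries 24
    fractional bits, so divide by SCALE**2 to renormalize."""
    acc = 0
    for w, x in zip(ws, xs):
        acc += to_fixed(w) * to_fixed(x)
    return acc / (SCALE * SCALE)  # back to float for comparison

weights = [0.5, -1.25, 0.333, 2.0]
inputs  = [1.0,  0.75, -0.5,  0.125]

exact  = sum(w * x for w, x in zip(weights, inputs))
approx = fixed_dot(weights, inputs)
print(exact, approx, abs(exact - approx))
```

With 12 fractional bits the error here is tiny; shrinking FRAC_BITS trades accuracy for narrower multipliers, which is exactly the benchmark-style trade-off the paper never quantifies.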

SomeStupidPoint | 8 years ago
Do you even want to use floats when talking about FPGA NNs?

You can talk directly about circuits on bitvectors, a subset of which looks like floats doing math, but your network might have a better encoding than that.

borramakot | 8 years ago
Are there performance numbers for this on, e.g., ResNet-50?