This paper was largely devoid of content beyond a basic summary of what neural nets are, what an FPGA is, some basics of both (forward propagation, backpropagation, use of HDL modules, use of LUTs), and a statement that the author created a thing.
Do you even want to use floats when talking about FPGA NNs?
banachtarski | 8 years ago
As a machine learning guy and total closet FPGA geek, this was sort of a disappointment. I would have liked to see topics addressed like floating-point precision, actual benchmarks, how the FPGA can pipeline things better than the best-known CPU or GPU algorithms due to a lack of pipeline stalls, issues with I/O of training data and predictions, and probably a discussion of LSTM gates or GRUs (which I think the FPGA is particularly suitable for).
marviel | 8 years ago
(Disclaimer --- I was a student working on this project)
SomeStupidPoint | 8 years ago
You can talk directly about circuits on bitvectors, of which a subset looks like floats doing math, but your network might have a better encoding than that.
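FWIW, here's a tiny sketch of what "bitvector circuits that look like floats doing math, but aren't" can mean in practice: a Q4.12 fixed-point multiply-accumulate, the float-free arithmetic an FPGA maps straight onto DSP slices and LUTs. The Q-format and the example weights are my own assumptions for illustration, not anything from the paper.

```python
FRAC_BITS = 12          # Q4.12: 4 integer bits, 12 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real-valued weight/activation to an integer bitvector."""
    return round(x * SCALE)

def fixed_mac(acc: int, w: int, a: int) -> int:
    """acc += w * a in fixed point; the raw product carries 2*FRAC_BITS
    fractional bits, so shift back down by FRAC_BITS."""
    return acc + ((w * a) >> FRAC_BITS)

# One neuron's dot product, all integer ops (no FPU anywhere):
weights = [to_fixed(w) for w in (0.5, -0.25, 1.5)]
acts    = [to_fixed(a) for a in (1.0, 2.0, 0.5)]
acc = 0
for w, a in zip(weights, acts):
    acc = fixed_mac(acc, w, a)
print(acc / SCALE)  # prints 0.75, i.e. 0.5*1.0 - 0.25*2.0 + 1.5*0.5
```

And as the comment says, even this is just one point in the design space: nothing forces you into a uniform Q-format; per-layer scales or narrower widths may encode the network better.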
borramakot | 8 years ago