Here's the bottom line for anyone who doesn't want to read the whole article.
> Using a commercially available 28-nanometer ASIC process technology, we have profiled (8, 1, 5, 5, 7) log ELMA as 0.96x the power of int8/32 multiply-add for a standalone processing element (PE).
> Extended to 16 bits, this method uses 0.59x the power and 0.68x the area of IEEE 754 half-precision FMA.
In other words, interesting but not earth shattering. Great to see people working in this area though!
At least 69% more multiply-add flops at the same power, iso-process, is nothing to sneeze at (we're largely power/heat bound at this point), and, unlike in normal floating point (IEEE, posit or whatever), multiplication, division/inverse and square root are more or less free power-, area- and latency-wise. This is not a pure LNS or a pure floating-point design: it is a hybrid that uses an LNS-style log representation for the values and products (FP itself being a log/linear hybrid, since its significand is linear), with the summation carried out in the linear domain.
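To make the "more or less free" part concrete: in a pure log-number representation, multiply, divide and square root collapse to integer add, subtract and shift on the log code. A toy Python sketch (the 8 fractional log bits here are an arbitrary choice for illustration, not the paper's format):

```python
import math

FRAC = 8  # assumed fixed-point bits for the log fraction (illustrative)

def enc(x):
    """Encode positive x as an integer fixed-point log2 code."""
    return round(math.log2(x) * 2**FRAC)

def dec(code):
    """Decode a log code back to a real value."""
    return 2 ** (code / 2**FRAC)

a, b = enc(6.0), enc(1.5)
product = dec(a + b)    # multiply -> integer add      (~9.0)
quotient = dec(a - b)   # divide   -> integer subtract (4.0 exactly here)
root = dec(a >> 1)      # sqrt     -> shift right by 1 (~sqrt(6))
```

The catch, of course, is that addition is no longer cheap in this representation, which is exactly the problem ELMA's log-to-linear trick addresses.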
Latency is also a lot less than an IEEE or posit floating-point FMA. This isn't in the paper: the results were reported at only 500 MHz because the float FMA couldn't meet timing closure at 750 MHz or higher in a single cycle, and the paper had to be pretty short for the deadline, so it couldn't explore the whole frontier and show 1-cycle vs 2-cycle vs N-cycle pipelined implementations.
The floating point tapering trick applied on top of this can help with the primary chip power problem, which is moving bits around, so you can solve more problems with a smaller word size because your encoding matches your data distribution better. Posits are a partial but not complete answer to this problem if you are willing to spend more area/energy on the encoding/decoding (I have a short mention about a learned encoding on this matter).
A floating point implementation that is more efficient than typical integer math but in which one can still do lots of interesting work is very useful too (providing an alternative for cases where you are tempted to use a wider bit width fixed point representation for dynamic range, or a 16+ bit floating point format).
You are wrong that this is not 'earth shattering': 40% efficiency increases are roughly what you'd get from a process-node step, and given that those are rather few and far between now, this is the equivalent of extending Moore's law by another 5 to 10 years.
That's for the actual number crunching, but the real power cost is often in bandwidth (as discussed earlier in the OP). If you can reliably use lower-precision arithmetic for training, you get 4x the flops for a halving of the bandwidth costs, since matrix multiplication only moves O(n^2) data for its compute.
They don't show a comparison to bfloat16 PEs/FMA. IEEE half precision uses a larger mantissa than bfloat16, and the cost of multiplication is proportional to the square of the mantissa size, so I'd expect much lower gains relative to bfloat16.
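Back-of-the-envelope, with the usual ~m^2 partial-product model for an array multiplier (a crude first-order cost proxy, not a gate count):

```python
# First-order cost model: an m-bit array multiplier has ~m^2 partial
# products. Significand sizes below include the hidden bit.
fp16_sig = 11   # IEEE 754 binary16: 10 fraction bits + hidden bit
bf16_sig = 8    # bfloat16: 7 fraction bits + hidden bit
ratio = fp16_sig**2 / bf16_sig**2
print(f"fp16 multiplier is ~{ratio:.2f}x bfloat16 by this model")  # ~1.89x
```

So a baseline that already spends roughly half the multiplier area would indeed shrink the headline savings.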
Not sure why this isn't getting more votes; it's a good avenue of research and the authors should be commended. That said, this approach to optimizing floating-point implementations has a lot of history at Imagination Technologies, ARM and similar low-power inference chipset providers. I especially like the Synopsys ASIP Designer [0] tool, which leverages the open-source (although not yet IEEE-ratified) LISA 2.0 Architecture Design Language [1] to iterate on these design issues.
Interesting times...
[0] https://www.synopsys.com/dw/ipdir.php?ds=asip-designer
[1] https://en.wikipedia.org/wiki/LISA_(Language_for_Instruction...
A bit off-topic, but I remember some studies about 'under-powered' ASICs, i.e. running at lower-than-required voltage and just letting the chip fail sometimes. I guess the outcome was that you can run with 0.1x the power and get 0.9x the correctness. Usually chips are designed so that they never fail, and that requires substantially more energy than the average case needs. If the application is probabilistic or noisy in general, additional 'computation noise' could be tolerated in exchange for better energy efficiency.
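A toy simulation of that tolerance (the failure model here is entirely made up, just to illustrate why noisy workloads can shrug off occasional bad operations):

```python
import random

random.seed(0)
FAIL_P = 0.01   # assume 1% of multiplies silently misbehave

def flaky_mul(a, b):
    """A multiply that occasionally returns a perturbed result."""
    r = a * b
    if random.random() < FAIL_P:
        r *= random.uniform(0.5, 1.5)   # failed op: result off by up to 50%
    return r

xs = [random.uniform(0, 1) for _ in range(10_000)]
ws = [random.uniform(0, 1) for _ in range(10_000)]
exact = sum(a * b for a, b in zip(xs, ws))
noisy = sum(flaky_mul(a, b) for a, b in zip(xs, ws))
rel_err = abs(noisy - exact) / exact
print(rel_err)  # the aggregate stays close despite ~100 bad ops
```

Individual results are occasionally way off, but the errors average out in the aggregate, which is roughly the bet those under-volting schemes make.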
Wow! It's kind of a weird feeling to see some research I worked on get some traction in the real world! The ELMA lookup problem for 32 bit could be fixed by using the posit standard, which just has "simple" adders for the section past the Golomb-encoded section, though you may have to worry about spending transistors on the barrel shifter.
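For reference, the posit regime is a run-length (Golomb-Rice-style) code, so decoding it is a leading-run count plus a shift, which is where the barrel-shifter cost creeps in. A bit-string sketch in Python (sign handling omitted):

```python
def decode_regime(bits: str):
    """Decode a posit regime field from a bit string (sign bit already
    handled). Returns (k, rest) where k is the regime value and rest is
    the remaining exponent/fraction bits. Standard posit rules: a run of
    m ones gives k = m - 1; a run of m zeros gives k = -m."""
    lead = bits[0]
    run = len(bits) - len(bits.lstrip(lead))
    k = run - 1 if lead == "1" else -run
    return k, bits[run + 1:]   # skip the run and its terminating bit

print(decode_regime("110101"))   # (1, '101')
print(decode_regime("0001101"))  # (-3, '101')
```

In hardware the run count is a leading-zero/one detector feeding a barrel shifter, which is cheap in logic but not free in area at wider widths.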
The ELMA LUT problem is in the log -> linear approximation to perform sums in the linear domain. This avoids the issue that LNS implementations have had in the past, which is in trying to keep the sum in the log domain, requiring an even bigger LUT or piecewise approximation of the sum and difference non-linear functions.
This is independent of any kind of posit or other encoding issue (i.e. it has nothing to do with posits). (I'm the author.)
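For intuition, a toy Python sketch of that split: multiplies happen in the log domain (an integer add), and each product goes through a small 2^f LUT back to linear before being summed in a fixed-point accumulator. The widths here are made up and are not the paper's (8, 1, 5, 5, 7) configuration, which also uses a wide Kulisch-style exact accumulator:

```python
import math

LOG_FRAC = 5   # fractional bits of the log encoding (made up)
LIN_FRAC = 7   # fractional bits of the linear LUT output (made up)

# LUT approximating 2**(f / 2**LOG_FRAC) for each fractional log value f.
LUT = [round(2 ** (f / 2**LOG_FRAC) * 2**LIN_FRAC) for f in range(2**LOG_FRAC)]

def to_log(x):
    """Encode positive x as an integer fixed-point log2 code."""
    return round(math.log2(x) * 2**LOG_FRAC)

def log_mul(a, b):
    """Multiplication in the log domain is just integer addition."""
    return a + b

def log_to_linear(code):
    """LUT lookup + shift: approximate linear value at scale 2**LIN_FRAC."""
    int_part, frac_part = code >> LOG_FRAC, code & (2**LOG_FRAC - 1)
    return LUT[frac_part] << int_part

# Dot product: multiplies in the log domain, accumulation in linear.
xs, ys = [1.5, 2.0, 3.25], [2.5, 4.0, 1.25]
acc = sum(log_to_linear(log_mul(to_log(x), to_log(y))) for x, y in zip(xs, ys))
approx = acc / 2**LIN_FRAC
exact = sum(x * y for x, y in zip(xs, ys))
print(approx, exact)  # 15.75 vs 15.8125: within ~0.4% here
```

The LUT only has to cover the fractional log bits (32 entries here), which is why keeping the sum in the linear domain is so much cheaper than approximating the LNS addition function directly.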
For those interested in the general area, I saw a good talk at CSAIL last week by Jiahao Chen on representing and manipulating floating-point numbers in Julia. The code, with some good documentation, is on his GitHub: https://github.com/jiahao/ArbRadixFloatingPoints.jl
Caveat: I haven't finished reading the entire FB announcement yet.
Google announced something along these lines at their AI conference last September and released the video today on YouTube. Here's the link to the segment where their approach is discussed:
https://www.youtube.com/watch?v=ot4RWfGTtOg&t=330s
> Significands are fixed point, and fixed point adders, multipliers, and dividers on these are needed for arithmetic operations... Hardware multipliers and dividers are usually much more resource-intensive
It's been a number of years since I've implemented low-level arithmetic, but when you use fixed point, don't you usually choose a power of 2? I don't see why you'd need multiplication/division instead of bit shifters.
Multiplication or division by a power of 2 can be done by a bit shift, assuming the binary numbers represent base-2 values, i.e. not a beta expansion (https://en.wikipedia.org/wiki/Non-integer_representation) where the binary digits are interpreted in base 1.5 or base sqrt(2) or base (pi-2) or whatever (in which case multiplication or division by powers of 1.5, sqrt(2) or (pi-2) could be done via a bit shift).
But when multiplying two arbitrary floating point numbers, your typical case is multiplying base-2 numbers not powers of 2, like 1.01110110 by 1.10010101, which requires a real multiplier.
General floating point addition, multiplication and division thus require fixed-point adders, multipliers and dividers on the significands.
Fixed point might usually put the "binary point" in between bits, but when doing a multiply between two of them you still have to do at least an integer multiply before the bit shift. Ditto division.
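Concretely, in a Qm.n format the multiply is an integer multiply followed by a renormalizing shift. A minimal Python sketch with an assumed Q4.4 layout:

```python
FRAC = 4  # assumed Q4.4 layout: value = code / 2**4

def to_fix(x):
    """Encode a real number as a Q4.4 integer code."""
    return round(x * 2**FRAC)

def to_float(code):
    """Decode a Q4.4 code back to a real number."""
    return code / 2**FRAC

def fix_mul(a, b):
    # The integer multiply yields a Q8.8 result; the shift renormalizes
    # it back to Q4.4 (truncating the low fraction bits).
    return (a * b) >> FRAC

print(to_float(fix_mul(to_fix(1.5), to_fix(2.25))))  # 3.375
```

The shift itself is free in hardware (just wiring), but the integer multiply that precedes it is the expensive part the parent is pointing at.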