
Making floating point math highly efficient for AI hardware

152 points | probdist | 7 years ago | code.fb.com

32 comments


grandmczeb|7 years ago

Here's the bottom line for anyone who doesn't want to read the whole article.

> Using a commercially available 28-nanometer ASIC process technology, we have profiled (8, 1, 5, 5, 7) log ELMA as 0.96x the power of int8/32 multiply-add for a standalone processing element (PE).

> Extended to 16 bits this method uses 0.59x the power and 0.68x the area of IEEE 754 half-precision FMA

In other words, interesting but not earth shattering. Great to see people working in this area though!

jhj|7 years ago

At least 69% more multiply-add flops at the same power, iso-process, is nothing to sneeze at (we're largely power/heat bound at this point). And unlike normal floating point (IEEE, posit, or whatever), multiplication, division/inverse, and square root are more or less free in power, area, and latency. This is neither a pure LNS nor pure floating point: it is a hybrid of "linear" floating point (FP being itself a hybrid log/linear format, with a linear significand) and an LNS log representation for the summation.

Latency is also a lot less than IEEE or posit floating point FMA. (This isn't in the paper: the results were run at only 500 MHz because the float FMA couldn't meet timing closure at 750 MHz or higher in a single cycle, and the paper had to be pretty short with a deadline, so we couldn't explore the whole frontier and show 1-cycle vs. 2-cycle vs. N-cycle pipelined implementations.)

The floating point tapering trick applied on top of this can help with the primary chip power problem, which is moving bits around, so you can solve more problems with a smaller word size because your encoding matches your data distribution better. Posits are a partial but not complete answer to this problem if you are willing to spend more area/energy on the encoding/decoding (I have a short mention about a learned encoding on this matter).

A floating point implementation that is more efficient than typical integer math but in which one can still do lots of interesting work is very useful too (providing an alternative for cases where you are tempted to use a wider bit width fixed point representation for dynamic range, or a 16+ bit floating point format).
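The "multiplication is more or less free" point above can be sketched in a few lines. This is a toy log-number-system (LNS) encoding, not the paper's (8, 1, 5, 5, 7) format: a positive value is stored as its base-2 log in fixed point, so multiplication becomes a plain fixed-point add and square root becomes a right shift. `FRAC_BITS` and the helper names are illustrative choices of mine.

```python
import math

FRAC_BITS = 7  # fractional bits of the fixed-point log (illustrative)

def encode(x: float) -> int:
    """Store log2(x) as a fixed-point integer (x > 0)."""
    return round(math.log2(x) * (1 << FRAC_BITS))

def decode(v: int) -> float:
    return 2.0 ** (v / (1 << FRAC_BITS))

def lns_mul(a: int, b: int) -> int:
    return a + b        # log(x*y) = log x + log y: just an adder

def lns_sqrt(a: int) -> int:
    return a >> 1       # log(sqrt x) = log(x) / 2: just a shift

print(decode(lns_mul(encode(3.0), encode(5.0))))  # ~15
print(decode(lns_sqrt(encode(16.0))))             # 4.0
```

The flip side, as the comment notes, is that addition is the hard operation in a log domain, which is where the log-to-linear summation trick comes in.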

TheRealPomax|7 years ago

Wait, 0.59x isn't Earth shattering? That's almost half the power, and at only 2/3 the area. Those are _huge_ differences at data center scale!

jacquesm|7 years ago

You are wrong that this is not 'earth shattering': 40% efficiency increases are roughly what you'd get from a process node step, and given that those are now few and far between, this is the equivalent of extending Moore's law by another 5 to 10 years.

dnautics|7 years ago

That's for the actual number crunching, but the real power cost is often in bandwidth (as discussed earlier in the OP). If you can reliably use lower-precision values for training, you get 4x the flops for a halving of the bandwidth cost, since matrix multiplication only moves O(n^2) data.
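A back-of-envelope sketch of that bandwidth point: an n x n matrix multiply performs ~2n^3 flops but touches only ~3n^2 values, so halving the word size halves bytes moved without changing the flop count. The 4x-flops figure assumes multiplier throughput scales roughly with the square of operand width (16 -> 8 bits), which is an assumption of this sketch, not a measured result.

```python
def matmul_traffic(n: int, bytes_per_elem: int):
    flops = 2 * n ** 3                  # one multiply-add per inner-loop step
    data = 3 * n ** 2 * bytes_per_elem  # A and B read once, C written once
    return flops, data

f16, d16 = matmul_traffic(4096, 2)  # 16-bit operands
f8, d8 = matmul_traffic(4096, 1)    # 8-bit operands
print(f16 == f8)      # True: same flop count either way
print(d16 / d8)       # 2.0: half the bytes moved
print((16 / 8) ** 2)  # 4.0: rough multiplier-throughput gain, assuming quadratic area scaling
```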

_yosefk|7 years ago

They don't show a comparison to bfloat16 PEs/FMA. IEEE half precision uses a larger mantissa than bfloat16, and the cost of multiplication is proportional to the square of the mantissa size. I'd expect much lower gains relative to bfloat16.
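The quadratic-cost argument above works out as follows. Counting the hidden bit, IEEE half precision has an 11-bit significand and bfloat16 an 8-bit one; under the common (approximate) model that array-multiplier area grows with the square of significand width, bfloat16's multiplier should come in around half the area. This is a rough model, not a synthesis result.

```python
fp16_sig = 10 + 1  # fp16 mantissa bits + hidden bit
bf16_sig = 7 + 1   # bfloat16 mantissa bits + hidden bit

ratio = (bf16_sig / fp16_sig) ** 2
print(f"bfloat16 multiplier ~{ratio:.2f}x the area of fp16's")  # ~0.53x
```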

moflome|7 years ago

Not sure why this isn't getting more votes, but it's a good avenue of research and the authors should be commended. That said, this approach to optimizing floating point implementations has a lot of history at Imagination Technologies, ARM, and similar low-power inference chipset providers. I especially like the Synopsys ASIP Designer [0] tool, which leverages the open-source (although not yet IEEE ratified) LISA 2.0 Architecture Description Language [1] to iterate on these design issues.

Interesting times...

[0] https://www.synopsys.com/dw/ipdir.php?ds=asip-designer [1] https://en.wikipedia.org/wiki/LISA_(Language_for_Instruction...

Geee|7 years ago

A bit off-topic, but I remember some studies about 'under-powered' ASICs, i.e. running at lower-than-required voltage and just letting the chip fail sometimes. I believe the outcome was that you can run at 0.1x the power and get 0.9x the correctness. Chips are usually designed so that they never fail, and that requires substantially more energy than the average case needs. If the application is probabilistic or noisy in general, additional 'computation noise' could be tolerated for better energy efficiency.
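A toy model of that idea, under loud assumptions: inject rare random bit flips into the partial products of an int8 dot product and see how often the result stays close to exact. The 1e-3 flip rate, the error threshold, and the low-order-bits-fail assumption are all arbitrary illustration values, not measurements from any real silicon.

```python
import random

random.seed(0)

def noisy_dot(a, b, flip_prob=1e-3):
    """int8 dot product with rare single-bit errors in the partial products."""
    acc = 0
    for x, y in zip(a, b):
        p = x * y
        if random.random() < flip_prob:    # rare 'timing failure'
            p ^= 1 << random.randrange(8)  # flip one low-order bit
        acc += p
    return acc

a = [random.randint(-128, 127) for _ in range(256)]
b = [random.randint(-128, 127) for _ in range(256)]
exact = sum(x * y for x, y in zip(a, b))

trials = 1000
ok = sum(abs(noisy_dot(a, b) - exact) < 512 for _ in range(trials))
print(ok / trials)  # nearly all results land close to the exact answer
```

For a noise-tolerant workload like neural-net inference, errors of this size may simply wash out, which is the intuition behind the voltage-underscaling studies.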

david-gpu|7 years ago

That sounds awful for verification, debugging, reproducibility and safety-critical systems. Imagine this in a self-driving car. Scary.

dnautics|7 years ago

Wow! It's kind of a weird feeling to see some research I worked on get some traction in the real world! The ELMA lookup problem for 32 bits could be fixed by using the posit standard, which just has "simple" adders for the section past the Golomb-encoded section, though you may have to worry about spending transistors on the barrel shifter.
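For readers unfamiliar with the "Golomb-encoded section": in a posit, after the sign bit comes the regime, a unary run of identical bits terminated by the opposite bit; decoding that variable-length run is what calls for the barrel shifter. A sketch of regime decoding (function and field names are mine, but the regime rule follows the posit definition):

```python
def decode_regime(bits: str):
    """bits: the posit body after the sign bit, as a '0'/'1' string."""
    lead = bits[0]
    run = len(bits) - len(bits.lstrip(lead))  # length of the identical-bit run
    k = (run - 1) if lead == '1' else -run    # regime value per the posit spec
    rest = bits[run + 1:]                     # remaining exponent + fraction bits
    return k, rest

print(decode_regime("110101"))  # (1, '101')
print(decode_regime("001101"))  # (-2, '101')
```

The regime then contributes a scale factor of (2^es)^k, which is where the "simple adders" for the rest of the exponent come in.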

jhj|7 years ago

The ELMA LUT problem is in the log-to-linear approximation used to perform sums in the linear domain. This avoids the issue that past LNS implementations have had: trying to keep the sum in the log domain requires an even bigger LUT, or a piecewise approximation of the sum and difference non-linear functions.

This is independent of any kind of posit or other encoding issue (i.e. it has nothing to do with posits).

(I'm the author)
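A minimal sketch of the log-to-linear summation idea described above: each log-domain value is converted to linear fixed point via a small 2^frac lookup table indexed by the fractional log bits, shifted by the integer log bits, and accumulated exactly in a wide Kulisch-style accumulator. Bit widths and the accumulator offset here are illustrative choices, not the paper's (8, 1, 5, 5, 7) parameters.

```python
import math

FRAC = 4       # fractional log bits -> a 16-entry LUT
LUT_BITS = 8   # precision of each LUT entry
# LUT[f] ~ 2^(f / 2^FRAC), scaled to LUT_BITS fractional bits
LUT = [round((2 ** (f / (1 << FRAC))) * (1 << LUT_BITS)) for f in range(1 << FRAC)]

def log_to_linear(v: int) -> int:
    """v is log2(x) in fixed point with FRAC fractional bits."""
    i, f = v >> FRAC, v & ((1 << FRAC) - 1)
    return LUT[f] << (i + 16)  # shift into a wide accumulator (offset 16 is arbitrary)

# sum 2.0 + 4.0 via the linear domain
va = round(math.log2(2.0) * (1 << FRAC))
vb = round(math.log2(4.0) * (1 << FRAC))
acc = log_to_linear(va) + log_to_linear(vb)   # exact integer sum
approx = acc / (1 << (LUT_BITS + 16))
print(approx)  # 6.0
```

Converting back to the log domain (not shown) needs the inverse log2 table, which is the LUT whose size becomes the problem at 32-bit precision.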

sgt101|7 years ago

For those interested in the general area: I saw a good talk about representing and manipulating floating point numbers in Julia at CSAIL last week, by Jiahao Chen. The code, with some good documentation, is on his GitHub.

https://github.com/jiahao/ArbRadixFloatingPoints.jl

davmar|7 years ago

Caveat: I haven't finished reading the entire FB announcement yet.

Google announced something along these lines at their AI conference last September and released the video on YouTube today. Here's the link to the segment where their approach is discussed: https://www.youtube.com/watch?v=ot4RWfGTtOg&t=330s

moltensyntax|7 years ago

> Significands are fixed point, and fixed point adders, multipliers, and dividers on these are needed for arithmetic operations... Hardware multipliers and dividers are usually much more resource-intensive

It's been a number of years since I've implemented low-level arithmetic, but when you use fixed point, don't you usually choose a power of 2? I don't see why you'd need multiplication/division instead of bit shifters.

jhj|7 years ago

Multiplication or division by a power of 2 can be done with a bit shift, assuming the binary digits represent a base-2 number; i.e., not a beta-expansion (https://en.wikipedia.org/wiki/Non-integer_representation) where the digits are base-1.5 or base-sqrt(2) or base-(pi-2) or whatever (in which case multiplication or division by powers of 1.5, sqrt(2), or (pi-2), respectively, could be done via bit shift).

But when multiplying two arbitrary floating point numbers, the typical case is multiplying base-2 significands that are not powers of 2, like 1.01110110 by 1.10010101, which requires a real multiplier.

General floating point addition, multiplication and division thus require fixed-point adders, multipliers and dividers on the significands.
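The shift-versus-multiply distinction above in a few lines of fixed-point arithmetic, using the comment's own example operands (a Q1.8 format is an illustrative choice of mine): multiplying two general significands takes a full integer multiply followed by a rescaling shift, while multiplying by a power of two is a shift alone.

```python
FRAC = 8  # Q1.8: 1 integer bit, 8 fractional bits (illustrative)

def fx(x: float) -> int:
    """Encode to fixed point."""
    return round(x * (1 << FRAC))

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC  # real integer multiply, then rescale

# 1.01110110 and 1.10010101 in binary:
a, b = fx(1.4609375), fx(1.58203125)
print(fx_mul(a, b) / (1 << FRAC))    # ~2.31: needs the multiplier
print((fx(1.25) << 1) / (1 << FRAC)) # 2.5: doubling is just a shift
```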

repsilat|7 years ago

Fixed point might usually put the "binary point" in between bits, but when doing a multiply between two of them you still have to do at least an integer multiply before the bit shift. Ditto division.

saagarjha|7 years ago

I find it interesting that they were able to find improvements even on hardware that is presumably optimized for IEEE-754 floating point numbers.

nestorD|7 years ago

It is a trade-off: they find improvements by losing precision where they believe it is not useful for their use case.