Large neural networks spend most computation on floating point tensor multiplications. In this work, we find that a floating point multiplier can be approximated by one integer adder with high precision. We propose the linear-complexity multiplication (L-Mul) algorithm that approximates floating point multiplication with integer addition operations. Compared to 8-bit floating point multiplication, the proposed method achieves higher precision while consuming significantly less bit-level computation. Since multiplying floating point numbers requires substantially more energy than integer addition, applying the L-Mul operation in tensor processing hardware can potentially reduce the energy cost of elementwise floating point tensor multiplications by 95% and of dot products by 80%. We calculated the theoretical error expectation of L-Mul and evaluated the algorithm on a wide range of textual, visual, and symbolic tasks, including natural language understanding, structural reasoning, mathematics, and commonsense question answering. Our numerical analysis experiments agree with the theoretical error estimation, indicating that L-Mul with a 4-bit mantissa achieves precision comparable to float8 e4m3 multiplication, and L-Mul with a 3-bit mantissa outperforms float8 e5m2. Evaluation results on popular benchmarks show that directly applying L-Mul to the attention mechanism is almost lossless. We further show that replacing all floating point multiplications with 3-bit-mantissa L-Mul in a transformer model achieves precision equivalent to using float8 e4m3 as the accumulation precision in both fine-tuning and inference.
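To make the mechanics concrete, here is a minimal Python sketch of the L-Mul approximation as the abstract describes it. The mantissa quantization and the choice of the offset exponent l(m) are my reading of the paper, not a verified reimplementation.

```python
import math

def l_mul(x: float, y: float, m: int = 4) -> float:
    """Approximate x * y by adding mantissas instead of multiplying them.

    Writing x = (1 + x_m) * 2**x_e with 0 <= x_m < 1 (and likewise for y),
    the approximation is (1 + x_m + y_m + 2**-l(m)) * 2**(x_e + y_e).
    """
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = -1.0 if (x < 0) != (y < 0) else 1.0
    fx, ex = math.frexp(abs(x))          # abs(x) = fx * 2**ex with 0.5 <= fx < 1
    fy, ey = math.frexp(abs(y))
    xm, xe = 2.0 * fx - 1.0, ex - 1      # rewrite as (1 + xm) * 2**xe
    ym, ye = 2.0 * fy - 1.0, ey - 1
    xm = math.floor(xm * 2**m) / 2**m    # keep only m mantissa bits
    ym = math.floor(ym * 2**m) / 2**m
    l = m if m <= 3 else (3 if m == 4 else 4)   # assumed offset schedule
    return sign * (1.0 + xm + ym + 2.0**-l) * 2.0**(xe + ye)

print(l_mul(1.75, 2.5), 1.75 * 2.5)      # approx. 4.25 vs the exact 4.375
```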
To me it intuitively seems wasteful to use floats to make ultimately boolean-like decisions, but that seemed like the way it had to be in order to have differentiable algorithms.
We used to use fixed-point multiplication (Q format) in DSP algorithms on various DSP architectures: https://en.wikipedia.org/wiki/Q_(number_format). They were very fast and nearly as accurate as floating point multiplication. Perhaps we need to use those DSP blocks as part of tensor units/GPUs to get both fast multiplication and parallelism.
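For readers who have not used Q format, here is a small illustrative sketch of Q15 fixed-point multiplication (a common DSP choice); the helper names are made up for the example.

```python
# Q15: a real value in [-1, 1) is stored as round(v * 2**15) in a 16-bit
# signed integer, so a multiply is one integer multiply plus a shift.
def to_q15(v: float) -> int:
    return int(round(v * (1 << 15)))

def q15_mul(a: int, b: int) -> int:
    return (a * b) >> 15              # Q15 * Q15 gives Q30; the shift restores Q15

def from_q15(q: int) -> float:
    return q / (1 << 15)

print(from_q15(q15_mul(to_q15(0.5), to_q15(0.75))))   # ~0.375
```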
Then I used it for Llama-3.2-3B-Instruct.F16.gguf and it output gibberish. So you would probably have to train and design your model specifically to use this multiplication approximation in order for it to work. Or maybe I'd have to tune the model so that only certain layers and/or operations use the approximation. The speed was decent, however: prefill only dropped from 850 tokens per second to 200 tok/sec on my Threadripper, and prediction speed was totally unaffected, staying at 34 tok/sec. I like how the code above generates vpternlog ops. So if anyone ever designs an LLM architecture and releases weights on Hugging Face that use this algorithm, we'll be able to run them reasonably fast without special hardware.
Your kernel seems to be incorrect for 1.75 * 2.5.
From the paper we have 1.75 == (1+0.75)*2^0 and 2.5 == (1+0.25)*2^1, so the result is
(1+0.75+0.25+2^-4)*2^1 == 4.125 (correct result is 4.375)
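Spelling out that arithmetic (the 2^-4 offset is the commenter's reading of the paper's parameters):

```python
exact  = 1.75 * 2.5                                 # 4.375
approx = (1 + 0.75 + 0.25 + 2**-4) * 2**1           # 4.125, per the formula above
print(exact, approx, (exact - approx) / exact)      # relative error of about 5.7%
```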
Extraordinary claims require extraordinary evidence.
Maybe it's possible, but consider that some really smart people, in many different groups, have been working diligently in this space for quite a while; so claims of 95% savings on energy costs _with equivalent performance_ are in the extraordinary category. Of course, we'll see when the tide goes out.
It is a clickbait headline; the claim itself is not extraordinary.
The preprint from arXiv was posted here some time back.
The 95% gain applies specifically to multiplication operations; inference is compute-light and memory-heavy in the first place, so the actual gains would be far smaller.
Tech journalism (all journalism, really) can hardly be trusted to publish grounded news, given the focus on clicks and revenue outlets need to survive.
I don't think this claim is extraordinary. Nothing proposed is mathematically impossible or even unlikely, just a pain in the ass to test (lots of retraining, fine-tuning, etc., and those operations are expensive when you don't already have massively parallel hardware available; otherwise you're ASIC/FPGAing for something with a huge investment risk).
If I could have a SWAG at it, I would say a low-resolution model like Llama-2 would probably be just fine (Llama-2 quantizes without too much headache), but a higher-resolution model like Llama-3 probably not so much, not without massive retraining anyway.
They’ve been working on unrelated problems, like the structure of the network or how to build networks with better results. There have been people working on improving the efficiency of the low-level math operations, and this is the culmination of that work. Figuring this stuff out isn’t super easy.
re: all above/below comments. It's still an extraordinary claim.
I'm not claiming it's not possible, nor am I claiming that it's not true, or, at least, honest.
But there will need to be evidence that, on real machines using real energy, _equivalent performance_ is achievable. A defense that "there are no suitable chips" is a bit disingenuous. If the 95% savings actually has legs, some smart chip manufacturer will do the math and make the chips. If it's correct, that chip-making firm will make a fortune. If it's not, they won't.
As someone who has worked in this space (approximate compute) on both GPUs and in silicon in my research, the power consumption claims are completely bogus, as are the accuracy claims:
> In this section, we show that L-Mul is more precise than fp8 e4m3 multiplications
> To be concise, we do not consider the rounding to nearest even mode in both error analysis and complexity estimation for both Mul and L-Mul
These two statements together are nonsensical. Sure, if you analyze accuracy while ignoring the part of the algorithm that gives the baseline its accuracy, you can derive whatever cherry-picked result you want.
If you round to nearest even, the product of two floating point values is the correctly rounded result of multiplying the original values at infinite precision; this is how floating point rounding usually works, and it is what IEEE 754 mandates for fundamental operations (such as the multiplication here) if you choose to follow those guidelines. Not rounding to nearest even results in a lot more quantization noise, and biased noise at that.
> applying the L-Mul operation in tensor processing hardware can potentially reduce 95% energy cost by elementwise floating point tensor multiplications and 80% energy cost of dot products
A good chunk of the energy cost is simply moving data between memories (especially external DRAM/HBM/whatever) and along wires, buffering values in SRAMs and flip-flops and the like. Combinational logic cost is usually not a big deal. While having a ton of fixed-function matrix multipliers does raise the cost of combinational logic quite a bit, at most what they have will probably cut the power of an overall accelerator by 10-20% or so.
> In this section, we demonstrate that L-Mul can replace tensor multiplications in the attention mechanism without any loss of performance, whereas using fp8 multiplications for the same purpose degrades inference accuracy
I may have missed it in the paper, but they have provided no details on (re)scaling and/or using higher precision accumulation for intermediate results as one would experience on an H100 for instance. Without this information, I don't trust these evaluation results either.
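To make the rounding point above concrete, here is a small self-contained check (not from the paper) showing that truncating products instead of rounding to nearest even leaves a one-sided, biased error:

```python
import math
import random

def quantize(v: float, bits: int, rne: bool) -> float:
    scale = 2 ** bits
    if rne:
        return round(v * scale) / scale       # Python's round() is round-half-to-even
    return math.floor(v * scale) / scale      # truncation

random.seed(0)
n, bias_trunc, bias_rne = 100_000, 0.0, 0.0
for _ in range(n):
    p = random.uniform(1, 2) * random.uniform(1, 2)
    bias_trunc += quantize(p, 4, rne=False) - p
    bias_rne   += quantize(p, 4, rne=True) - p

print(bias_trunc / n)   # clearly negative: truncation always rounds down
print(bias_rne / n)     # close to zero on average
```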
Isn't this just taking advantage of "log(x) + log(y) = log(xy)"? The IEEE 754 floating-point representation stores floats as sign, mantissa, and exponent -- ignore the first two (you quantized anyway, right?), and the exponent is just an integer storing log() of the float.
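A quick sketch of that intuition, which the reply below refines (float32 bit patterns, positive inputs only, nothing taken from the paper): since the exponent field is a biased floor(log2 x), adding raw bit patterns and subtracting the bias once behaves a lot like adding logs.

```python
import struct

def bits(x: float) -> int:
    return struct.unpack("<I", struct.pack("<f", x))[0]

def unbits(b: int) -> float:
    return struct.unpack("<f", struct.pack("<I", b))[0]

def approx_mul(x: float, y: float) -> float:
    # subtract the exponent bias (0x3F800000 == bits(1.0)) so it isn't counted twice
    return unbits(bits(x) + bits(y) - 0x3F800000)

print(approx_mul(1.75, 2.5), 1.75 * 2.5)   # 4.0 vs 4.375
```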
Not quite: It's taking advantage of (1+a)(1+b) = 1 + a + b + ab.
And where a and b are both small-ish, ab is really small and can just be ignored.
So it turns the (1+a)(1+b) into 1+a+b. Which is definitely not the same! But it turns out, machine guessing apparently doesn't care much about the difference.
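A quick numerical check of how much dropping the ab term can cost (illustrative only):

```python
import random

random.seed(1)
worst = 0.0
for _ in range(100_000):
    a, b = random.random(), random.random()    # mantissa fractions in [0, 1)
    exact = (1 + a) * (1 + b)
    approx = 1 + a + b                         # cross term ab dropped
    worst = max(worst, (exact - approx) / exact)
print(worst)   # stays below 0.25, since ab / ((1+a)(1+b)) < 1/4 for a, b < 1
```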
This has been done for decades in digital circuits, FPGAs, digital signal processing, etc. Floating point is both resource- and power-intensive, and using FP without dedicated FP processing hardware is something that has been avoided and done without for decades unless absolutely necessary.
Right, the ML people are learning, slowly, about the importance of optimizing for silicon simplicity, not just reduction of symbols in linear algebra.
Their rediscovery of fixed point was bad enough but the “omg if we represent poses as quaternions everything works better” makes any game engine dev for the last 30 years explode.
Maybe I am just a natural skeptic, but whenever I see a headline that says 'method x reduces y by z%' and the text instead says that optimizing some step 'could potentially reduce y by up to z%', I get suspicious.
Why not publish some actual benchmarks that prove your claim in even a few special cases?
Well, one, because the headline isn't from the researchers; it's from a popular press report (not even the one posted here originally, as this is secondary reporting of another popular press piece) and isn't what the paper claims, so it would be odd for the paper's authors to conduct benchmarks to justify it. (And, no, even the "up to 95%" isn't from the paper. The cost savings are cited per operation, depending on the operation and the precision it is conducted at, are as high as 97.3%, and are based on existing research establishing the energy cost of math operations on modern compute hardware, but no end-to-end cost savings claim is made.)
And, two, because the actual energy cost savings claimed aren't even the experimental question -- the energy cost differences between various operations on modern hardware have been established in other research. The experimental issue here was whether the mathematical technique that enables using the lower-energy operations performs competitively on output quality with existing implementations when substituted in for LLM inference.
OTOH you have living proof that an amazingly huge neural network can run on 20 W of power, so expecting multiple orders of magnitude of reduction in power consumption is not unreasonable.
"The first release of bitnet.cpp is to support inference on CPUs. bitnet.cpp achieves speedups of 1.37x to 5.07x on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by 55.4% to 70.0%, further boosting overall efficiency. On x86 CPUs, speedups range from 2.37x to 6.17x with energy reductions between 71.9% to 82.2%. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices. More details will be provided soon."
Because, as disappointing as modern life is, you need clickbait headlines to drive traffic. You did the right thing by reading the article, though; that's where the information is, not the title.
Obviously, energy cost creates a barrier to entry, so reduction of cost reduces the barrier to entry... which adds more players... which increases demand.
I don't think algorithms will change energy consumption. There is always a need for maximum compute capacity. If tomorrow a new algorithm increases performance 4x, we will just do 4x more computing.
In the end, the power savings mean that current models that are "good enough" will fit a much smaller compute budget, such as edge devices. However, enthusiasts are still going to want the best hardware they can afford because, inevitably, everyone will want to maximize the size and intelligence of the model they can run. So we're just going to scale. This might bring GPT-4-level models to edge devices, but we are still going to want to run what might resemble a GPT-5/6 model on the best hardware possible at the time. So don't throw away your GPUs yet. This will bring capabilities to the mass market, but your high-end GPU will still scale the solution n-fold, and you'll be able to run models with disregard for the energy savings promoted in the headline.
In other sensationalized words: "AI engineers can claim new algorithm allows them to fit GPT-5 in an RTX5090 running at 600 watts."
This isn't really the optimization I'm thinking about, but: given the weird and abstract nature of how ML in general and LLMs in particular work, it seems reasonable to think that there might be algorithms that achieve the same, or a similar, result in an orders-of-magnitude more efficient way.
The trend of hyping up papers too early is eroding people's faith in science, due to poor journalism failing to explain that this is theoretical. The outlets that do this should pay the price, but they don't, because almost every outlet does it.
I am not well versed in the math involved, but IMO if the outcome depends mostly on the differences between the numbers (a smaller-or-bigger distinction) as well as their magnitudes, then exactness might not be needed. I mean, as long as the approximate "function" looks similar to the exact one, that might be good enough.
Maybe even generate a table of the approximate results and use that, at various stages? Like the way sin/cos was done 30 years ago, before FP coprocessors arrived.
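A minimal sketch of that table idea, the way sin was often done before FP coprocessors (the table size and names are arbitrary choices for the example):

```python
import math

TABLE_SIZE = 256
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lut(x: float) -> float:
    idx = int(x / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE   # index into one period
    return SIN_TABLE[idx]

print(sin_lut(1.0), math.sin(1.0))   # close, up to the table's resolution
```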
Hypothetically, if this is as true and simple as the headline implies -- AI using 95% less power doesn't mean AI will use 95% less power; it means we will do 20x more AI. As long as it's the current fad, we will throw as much power and as many resources at this as we can physically produce, because our economy depends on constant, accelerating growth.
Bad news for Nvidia how? Even ignoring that the power savings are only on one type of instruction, 20x less power doesn't mean it runs 20x faster. You still need big fat GPUs.
If this increases integer demand and decreases floating point demand, that moderately changes future product design and doesn't do much else.
People say this but then the fastest and most-used implementation of these optimizations is always written in CUDA. If this turns out to not be a hoax, I wouldn't be surprised to see Nvidia prices jump in correlation.
Wouldn't reduced power consumption for an unfulfilled demand mean more demand for Nvidia, since they would now need more chips to use the available power capacity? (As concentration tends to be the more efficient way.)
I wonder if someone has fed this entire "problem" into the latest ChatGPT o1 (the new model with reasoning capability): just fed it all the code for a multilayer perceptron and given it the task/prompt of finding ways to implement the same network using only integer operations.
Surely even the OpenAI devs must have done this the minute they finished training that model, right? I wonder if they'd even admit it was an AI that came up with the solution, rather than just publishing it and taking credit. Haha.
> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.
> Alas, that does not remotely resemble how people are pitching this technology.
djoldman|1 year ago
onlyrealcuzzo|1 year ago
Presumably there will be a lot of interest.
etcd|1 year ago
Here https://news.ycombinator.com/item?id=41784591 but even before that. It is possibly one of those obvious ideas to people steeped in this.
yogrish|1 year ago
mvkel|1 year ago
jart|1 year ago
I tried implementing this for AVX512 with tinyBLAS in llamafile.
raluk|1 year ago
kayo_20211030|1 year ago
manquer|1 year ago
throwawaymaths|1 year ago
Randor|1 year ago
https://github.com/microsoft/BitNet
vlovich123|1 year ago
kayo_20211030|1 year ago
stefan_|1 year ago
jhj|1 year ago
_aavaa_|1 year ago
codethief|1 year ago
remexre|1 year ago
mota7|1 year ago
convolvatron|1 year ago
robomartin|1 year ago
https://news.ycombinator.com/item?id=41816598
fidotron|1 year ago
ausbah|1 year ago
ujikoluk|1 year ago
didgetmaster|1 year ago
dragonwriter|1 year ago
baq|1 year ago
andrewstuart|1 year ago
"The first release of bitnet.cpp is to support inference on CPUs. bitnet.cpp achieves speedups of 1.37x to 5.07x on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by 55.4% to 70.0%, further boosting overall efficiency. On x86 CPUs, speedups range from 2.37x to 6.17x with energy reductions between 71.9% to 82.2%. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices. More details will be provided soon."
TheRealPomax|1 year ago
GistNoesis|1 year ago
mattxxx|1 year ago
gosub100|1 year ago
narrator|1 year ago
holoduke|1 year ago
Art9681|1 year ago
gcanyon|1 year ago
greenthrow|1 year ago
panosv|1 year ago
ein0p|1 year ago
idiliv|1 year ago
hello_computer|1 year ago
https://arxiv.org/abs/2307.01415
selimthegrim|1 year ago
littlestymaar|1 year ago
unknown|1 year ago
[deleted]
andrewstuart|1 year ago
https://github.com/microsoft/BitNet
syntaxing|1 year ago
creativenolo|1 year ago
I had assumed the latency etc were based on what was desirable for the use case and hardware, rather than power consumption.
asicsarecool|1 year ago
svilen_dobrev|1 year ago
m463|1 year ago
DennisL123|1 year ago
andrewstuart|1 year ago
For the sake of the climate and environment, it would be nice if this were true.
Bad news for Nvidia. “Sell your stock” bad.
Does it come with a demonstration?
mouse_|1 year ago
Dylan16807|1 year ago
talldayo|1 year ago
Nasrudith|1 year ago
faragon|1 year ago
DrNosferatu|1 year ago
Wheatman|1 year ago
m3kw9|1 year ago
tartakovsky|1 year ago
DesiLurker|1 year ago
unknown|1 year ago
[deleted]
nprateem|1 year ago
neuroelectron|1 year ago
quantadev|1 year ago
chx|1 year ago
https://hachyderm.io/@inthehands/112006855076082650