Why does this code execute more slowly after strength-reducing multiplications?

434 points | ttsiodras | 3 years ago | stackoverflow.com

148 comments

sampo|3 years ago

In the post, 3 multiplications and 2 additions are not faster than 2 additions.

The post compares (1) loop code that can be vectorized, as loop rounds are independent and do not depend on the result from the previous round, and (2) an "optimization" that makes calculations shorter, but also makes each loop round depend on the result of the previous round, so this cannot be vectorized.
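
The two variants from the post look roughly like this (a from-memory sketch of the Stack Overflow code; function names are mine):

```c
#include <stddef.h>

/* Independent iterations: each data[i] = A*i*i + B*i + C is a pure
 * function of i, so iterations can run in parallel (SIMD, pipelining). */
void compute_direct(double *data, size_t len, double A, double B, double C)
{
    for (size_t i = 0; i < len; i++)
        data[i] = A * (double)i * (double)i + B * (double)i + C;
}

/* Strength-reduced: fewer operations per iteration, but Y and Z carry
 * over from the previous iteration, forming a serial dependency chain. */
void compute_incremental(double *data, size_t len, double A, double B, double C)
{
    double Z = A + B; /* first forward difference, f(1) - f(0) */
    double Y = C;     /* f(0) */
    for (size_t i = 0; i < len; i++) {
        data[i] = Y;  /* Y == A*i*i + B*i + C */
        Y += Z;       /* advance by the current first difference */
        Z += 2.0 * A; /* second difference of a quadratic is constant */
    }
}
```

The second version does less arithmetic per element, yet every iteration must wait for the previous one.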

sillysaurusx|3 years ago

More generally, “stop storing state, unless it makes the program less complicated.”

The first version is simple. The second version is more complicated. The simplicity is because the first version can be represented as a formula; imagine trying to write out the second version as a formula, and the complexity becomes obvious.

(The addition loop would have to be written as a recurrence relation, which of course means every step depends on the previous step.)

Complexity isn’t always obvious. It’s sometimes common in C programs to write a loop like dst[index++] = src[i], particularly in deeply nested for-loops. In my experience it’s almost always worth rewriting it so that the indices are computable entirely from the loop iteration variables, with no state. It helps you build a mental map of the operations, because you can think of it in terms of geometry (memcpy = copying rectangles onto other larger rectangles) whereas when the index is stateful it becomes quite a lot harder to visualize in higher dimensions. At least for me.
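
A minimal sketch of that rewrite (hypothetical 2-D copy; all names are mine):

```c
#include <stddef.h>

/* Stateful: the destination index is a running counter. */
void copy_stateful(double *dst, const double *src,
                   size_t rows, size_t cols, size_t src_stride)
{
    size_t index = 0;
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            dst[index++] = src[r * src_stride + c];
}

/* Stateless: every index is a pure function of the loop variables, so the
 * code reads as "copy a rows-by-cols rectangle out of a wider image". */
void copy_stateless(double *dst, const double *src,
                    size_t rows, size_t cols, size_t src_stride)
{
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            dst[r * cols + c] = src[r * src_stride + c];
}
```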

We’ve been building a compiler at Groq for our custom ML hardware. (Roughly, “turn pytorch into our ISA with no extra programmer effort.”) I used to think of posts like this as “Well, modern compilers are so smart, who really cares about autovectorization?” — it turns out you care when you need to write your own compiler. :)

MLIR is pretty cool. It makes a lot of transformations like this pretty easy. The MLIR “krnl” dialect can also automatically transform nested loops into tiled iteration. Graphics devs will know what I mean — no need to loop over 8x8 blocks of pixels manually, just write “for x in range(width): for y in range(height): …” and set the block size to 8.
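
Written out by hand, that tiled iteration looks something like this (plain C sketch of what the dialect automates; the tile size and per-pixel work are stand-ins):

```c
#define TILE 8

static int min_int(int a, int b) { return a < b ? a : b; }

/* Visit the image in TILE x TILE blocks instead of row by row; the min()
 * bounds handle images whose sides are not multiples of TILE. */
void process_tiled(unsigned char *img, int width, int height)
{
    for (int by = 0; by < height; by += TILE)
        for (int bx = 0; bx < width; bx += TILE)
            for (int y = by; y < min_int(by + TILE, height); y++)
                for (int x = bx; x < min_int(bx + TILE, width); x++)
                    img[y * width + x] += 1; /* stand-in for real work */
}
```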

magicalhippo|3 years ago

> In the post, multiplications and 2 additions are not faster than 2 additions.

Reminded me of when I tried to optimize some code using assembly on x86, maybe 8-10 years ago. The compiler spat out a DIV [EDI] instruction inside a hot loop. I saw how I could easily rewrite it to use a register instead of the indirect memory access.

So I did, and it was significantly slower, like 10-15%... I even took care to align the loop target address and so on. If I changed my code to use the memory indirection it was measurably faster.

And with that I decided hand-optimizing x86 assembly was definitely a skill to leave behind.

dreamcompiler|3 years ago

The other issue is instruction-level parallelism, as another poster in TFA pointed out. Even within a single loop iteration the "unoptimized" code is more likely to exploit multiple ALUs if they exist, regardless of vectorization instructions.

dang|3 years ago

Ok, we've reverted the title to what the article says.

(Submitted title was "Multiplications and 2 additions are faster than 2 additions")

eklitzke|3 years ago

To be precise, they're both "vectorized" in the sense that both versions are using SSE vector instructions (and in fact, Clang will even generate AVX instructions for the first version if you use -mavx2). The difference is really the data dependency which has a massive effect on the ability of the CPU to pipeline the operation.

For the first version w/ AVX I get:

$ perf stat ./a.out [-] Took: 225634 ns.

Performance counter stats for './a.out':

            247.34 msec task-clock:u              #    0.998 CPUs utilized          
                 0      context-switches:u        #    0.000 /sec                   
                 0      cpu-migrations:u          #    0.000 /sec                   
             2,009      page-faults:u             #    8.122 K/sec                  
       960,495,151      cycles:u                  #    3.883 GHz                    
     2,125,347,630      instructions:u            #    2.21  insn per cycle         
        62,572,806      branches:u                #  252.982 M/sec                  
             3,072      branch-misses:u           #    0.00% of all branches        
     4,764,794,900      slots:u                   #   19.264 G/sec                  
     2,298,312,834      topdown-retiring:u        #     48.2% retiring              
        37,370,940      topdown-bad-spec:u        #      0.8% bad speculation       
       186,854,701      topdown-fe-bound:u        #      3.9% frontend bound        
     2,242,256,423      topdown-be-bound:u        #     47.1% backend bound         

       0.247734256 seconds time elapsed

       0.241338000 seconds user
       0.004943000 seconds sys

For the second version with SSE and the data dependency I get:

$ perf stat ./a.out [-] Took: 955104 ns.

Performance counter stats for './a.out':

            975.30 msec task-clock:u              #    1.000 CPUs utilized          
                 0      context-switches:u        #    0.000 /sec                   
                 0      cpu-migrations:u          #    0.000 /sec                   
             2,010      page-faults:u             #    2.061 K/sec                  
     4,031,519,362      cycles:u                  #    4.134 GHz                    
     3,400,341,362      instructions:u            #    0.84  insn per cycle         
       200,073,542      branches:u                #  205.140 M/sec                  
             3,192      branch-misses:u           #    0.00% of all branches        
    20,091,613,665      slots:u                   #   20.600 G/sec                  
     3,110,995,283      topdown-retiring:u        #     15.5% retiring              
       236,371,925      topdown-bad-spec:u        #      1.2% bad speculation       
       236,371,925      topdown-fe-bound:u        #      1.2% frontend bound        
    16,546,034,782      topdown-be-bound:u        #     82.2% backend bound         

       0.975762759 seconds time elapsed

       0.967603000 seconds user
       0.004937000 seconds sys

As you can see, the first version gets nearly 3x better IPC (2.21 vs 0.84) and spends half as much time backend bound.

mike_hock|3 years ago

The first comment on the code does say precisely this.

Arwill|3 years ago

SIMD vectorization is like quantum mechanics, it becomes classical if you look.

dragontamer|3 years ago

Tldr: autovectorizer transforms the first code into SIMD instructions, but the second into 64 bit instructions.

SIMD is very powerful, and modern compilers can sometimes simd-ify your code automatically.

-------

The second code could probably become SIMD as well, but it's beyond GCC's ability to autovectorize it in that form. I kinda want to give it a go myself but don't have time today...

Autovectorizers are mysterious, often harder to predict and use than explicit SIMD code (like OpenCL or CUDA).

loeg|3 years ago

The other essential insight is that the second version has data dependencies between loop iterations. Without that, the tldr is incomplete.

MauranKilom|3 years ago

> The second code can probably become SIMD as well, but it's beyond GCC's ability to autovectorizer it in that form.

Usually, for floating point operations the compiler simply has no chance to do anything clever. NaNs, infinities and signed zero mean that even the most "obvious" identities don't actually hold. For example, x + 0 == x (where the == is bitwise) does not hold for x = -0.
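
A tiny standard-C demonstration of the -0.0 case (helper name is mine):

```c
#include <math.h>

/* IEEE 754, round-to-nearest: -0.0 + 0.0 yields +0.0, so folding
 * "x + 0.0" into "x" would change the sign bit for x = -0.0. */
int plus_zero_preserves_sign(double x)
{
    double y = x + 0.0;
    return !!signbit(y) == !!signbit(x);
}
```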

parenthesis|3 years ago

How can one go about making one's code apt for a compiler to be able to do these kinds of things?

mgaunard|3 years ago

The problem is not the ability of the compiler, is that it's not numerically equivalent, so the compiler isn't allowed to do that optimization.

zeusk|3 years ago

How would the second approach be vectorized, given that each iteration's input depends on the previous iteration's output?

rasz|3 years ago

Both versions use SSE and are pipelined. The problem with the second one is the data dependency: only two adds, but the second add directly depends on the first one's result = stall.

ascar|3 years ago

> Tldr: autovectorizer transforms the first code into SIMD instructions, but the second into 64 bit instructions.

That's not the tldr. It's actually more wrong than right.

The actual TLDR is: loop dependencies prohibit the compiler _and_ your CPU from parallelizing. The SIMD part is the compiler, the more important part here is the CPU though.

Long explanation:

Let's start with the compiler. Yes, the first version can be vectorized, and you can see that in the included screenshots of the assembly output: the use of addpd [1] (add 2 pairs of 64-bit doubles simultaneously) instead of addsd [2] (add 1 pair of 64-bit doubles). Both operations have identical throughput/latency on any architecture that's not too ancient (e.g. check the information for the corresponding intrinsic here [3]; unfortunately Agner's instruction tables don't include ADDSD for most architectures). So while you can crunch 2 operations using SIMD at the same time, you are still doing 3 multiplications and 2 additions instead of 2 additions. Multiplication and addition have the same throughput, at least on Skylake. So we can use simple math, and 5/2 is still more than 2!

Now to the CPU part. The loop dependency prevents the CPU from properly utilizing instruction pipelining [4], and now the difference between throughput and latency comes into play. In a pipelined scenario the CPU can work on multiple steps of the instruction cycle in parallel (from fetching the instruction, to decoding it, to executing the operation, to eventually writing the result back to the register) and thus achieve higher throughput. However, if your next instruction depends on the one before it, the CPU has to finish all the steps of the pipeline from start to finish before the next instruction can start. This is the latency of an instruction. For example, mulsd/mulpd have 8 times higher latency than (inverse) throughput on Intel Skylake.

So while SIMD comes in at a factor of 2, the loop dependency stops the CPU from realizing a potential factor of 8.

Peter Cordes also correctly points this out in his comments and answer to the question on stackoverflow.

[1] https://www.felixcloutier.com/x86/addpd

[2] https://www.felixcloutier.com/x86/addsd

[3] https://www.intel.com/content/www/us/en/docs/intrinsics-guid...

[4] https://en.wikipedia.org/wiki/Instruction_pipelining
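
The latency-vs-throughput gap is easy to see in plain C with a reduction: one accumulator forms a single dependency chain, while several independent accumulators keep multiple chains in flight (a generic sketch, not the post's loop):

```c
#include <stddef.h>

/* One chain: each add waits for the previous add's result (latency-bound). */
double sum_serial(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four chains: the adds are independent, so the loop can approach the
 * adder's throughput. Note this reassociates the FP sum, so compilers
 * will only do it for you under flags like -ffast-math. */
double sum_unrolled(const double *a, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++) /* remainder */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```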

mvuksano|3 years ago

I think it's worth pointing out that the reason why these two examples execute at different speed is due to how compiler translated code AND because CPU was able to parallelize work. Compilers take knowledge about target platform (e.g. instruction set) and code and translate it into executable code. Compiler CAN (but doesn't have to) rewrite code only if it ALWAYS produces the same result as input code.

I feel like for the last 10-15 years the (majority of) people have stopped thinking about the specific CPU and only think about the ISA. That works for a lot of workloads, but in recent years I have observed more and more interest in how a specific CPU can execute code as efficiently as possible.

If you're interested in the kind of optimizations performed in the example you should check out polyhedral compilation (https://polyhedral.info/) and halide (https://halide-lang.org/). Both can be used to speed up certain workloads significantly.

manholio|3 years ago

Seems like you can have the cake and eat it too, by manually parallelizing the code to something like this:

    double A4 = A+A+A+A;
    double Z = 3*A + B;
    double Y1 = C;
    double Y2 = A+B+C;

    int i;
    // ... setup unroll when LEN is odd...

    for(i=0; i<LEN; i++) {
        data[i] = Y1;
        data[++i] = Y2;
        Y1 += Z;
        Y2 += Z;
        Z += A4;
    }
Probably not entirely functional as written, but you get the idea: unroll the loop so that the data dependent paths can each be done in parallel. For the machine being considered, a 4 step unroll should achieve maximum performance, but of course, you get all the fun things that come with hard-coding the architecture in your software.

im3w1l|3 years ago

The idea is right, but some details are wrong. You need a separate Z for each Y. But even if that's done, it is indeed faster.
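
With that fix, a working 2-way unroll might look like this (my reconstruction, assuming the post's data[i] = A*i*i + B*i + C; handles only even lengths):

```c
#include <stddef.h>

/* Two independent (Y, Z) chains: Y1/Z1 cover even indices, Y2/Z2 odd ones.
 * Stepping i by 2 changes f(i) by a stride-2 first difference, and that
 * difference itself grows by the constant 8*A each step. */
void compute_unrolled2(double *data, size_t len, double A, double B, double C)
{
    double Y1 = C;             /* f(0) */
    double Y2 = A + B + C;     /* f(1) */
    double Z1 = 4*A + 2*B;     /* f(2) - f(0) */
    double Z2 = 8*A + 2*B;     /* f(3) - f(1) */
    const double dZ = 8*A;     /* stride-2 second difference */
    for (size_t i = 0; i + 1 < len; i += 2) {
        data[i]     = Y1;
        data[i + 1] = Y2;
        Y1 += Z1; Z1 += dZ;    /* chain 1 */
        Y2 += Z2; Z2 += dZ;    /* chain 2, independent of chain 1 */
    }
}
```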

oconnor663|3 years ago

The part about data dependencies across loop iterations is fascinating to me, because it's mostly invisible even when you look at the generated assembly. There's a related optimization that comes up in implementations of ChaCha/BLAKE, where we permute columns around in a kind of weird order, because it breaks a data dependency for an operation that's about to happen: https://github.com/sneves/blake2-avx2/pull/4#issuecomment-50...

ummonk|3 years ago

The pipelining issue is interesting because my reaction becomes “shouldn’t the CPU just come with a larger vector size and then operate on chunks within the vector to optimize pipelining?” but then I realize I’m just describing a GPU.

someweirdperson|3 years ago

I doubt repeatedly adding floating point numbers is a good idea. With every addition the sum moves further away from the addend, and as their relative difference grows, so do the rounding problems.

Just because the algebra works it doesn't guarantee that the code does, too.

ascar|3 years ago

Your points are factually correct, but in practice not a big concern if your floating point type is much more precise than necessary for the numbers used.

E.g. if you use doubles for the range of 32-bit integers, even adding 1.0 2 billion times to 2 billion still ends up at 4 billion. Even adding 1.0 20 billion times to 20 billion ends up at 40 billion. Now, adding 0.1 20 billion times to 2 billion ends up 1908 short on my CPU, i.e. about 19080 absorptions/rounding errors occurred. You need some serious differences and amounts of operations to actually trigger errors.
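
The effect is much easier to trigger in single precision, where consecutive integers are only representable up to 2^24 (a small self-contained check; the helper name is mine):

```c
/* Returns 1 if adding `addend` to `base` is completely absorbed, i.e.
 * the rounded sum is `base` again. Exact FP comparison is intentional. */
int is_absorbed_f(float base, float addend)
{
    return base + addend == base;
}
```

Above 2^24 = 16777216 the gap between adjacent floats is 2.0, so adding 1.0f there changes nothing.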

sray|3 years ago

The author of the post is aware of this. They explicitly say that is not the point of the question.

LAC-Tech|3 years ago

The algebra works for reals, binary floating point is a different beast!

foxes|3 years ago

It’s almost like managing the state yourself on a modern CPU is a complicated task. Automatic vectorisation, reordering, etc. can all be done more easily on a pure declarative program. Imperatively managing the state and the control flow with an ancient language like C really no longer reflects the underlying hardware, which is significantly more advanced.

savant_penguin|3 years ago

What would be the difference in power consumption from each method? (Would it be always better to multiply? If so why not multiply by one?)

chaboud|3 years ago

The general rule to follow in power consumption on CPUs is to do your work quickly and then get to sleep. Propagating clock is going to eat the bulk of your power. The mild difference between multiply and add in actual usage is inside the noise (orders of magnitude smaller). The bigger penalty in this case is the inter-iteration dependency, which, vectorized or not, runs the risk of holding up the whole show due to pipelining.

As a performance rule on modern processors: avoid using the result of a calculation as long as you reasonably can (in tight loops... You don't want to be out of cache.).

Have fun threading the needle!

dento|3 years ago

Usually the faster version consumes less power, as it allows the core more time asleep. This is known as the race-to-sleep or race-to-idle principle.

varajelle|3 years ago

The problem here is that the additions depend on values computed in the previous iteration of the loop. The version with multiplication is faster because there are no dependencies on the previous iteration, so the CPU has more freedom in scheduling the operations.

The power consumption is a good question.

kllrnohj|3 years ago

In this case since the slower one is spending most of its time stalled out, the faster one will probably be more power efficient. This isn't an AVX512 power-gobbling monstrosity issue.

ww520|3 years ago

Besides the autovectorization, the second version also has two additional assignments. Depending on how the storage is used for these, whether it’s to registers, L1/L2, or stack, there might be performance hit.

ummonk|3 years ago

Why would it be in anything but registers?

mgaunard|3 years ago

It's fairly obvious: the rewrite prevents parallelization because floating-point isn't associative.

You'd need to parallelize it explicitly (which can be done by just unrolling the loop).

ribit|3 years ago

First: we need to finally stop the harmful myth that floating point multiplication is slower than addition. This has not been true for a long while.

Second: why are so many people insisting that the loop is auto-vectorised? Is there any evidence to that? Data dependencies alone explain the observed performance delta. Auto-vectorization would have resulted in a higher speedup.

tomxor|3 years ago

Vectorisation is not free. There is one other dimension to optimise for: power.

The suggested "slower" optimisation fundamentally uses fewer instructions. Chucking more hardware at parallelisable problems makes them run faster but does not necessarily reduce the power requirements much, because there are fundamentally the same number of operations; it's just gobbling up the same power over a shorter period of time. The "slower" serial algorithm uses fewer instructions and in theory less power in total. Disclaimer: I mean power vs speed fundamentally, in theory; in practice there may be no measurable difference depending on the particular CPU and code.

ascar|3 years ago

Do you have a reference for that? I googled and I couldn't find anything good.

While it might sound intuitive that SIMD instructions consume more power, I don't think that's necessarily true to a relevant degree in practice. My understanding is that CPU power consumption is mostly tied to inefficiencies that cause energy loss via heat, while the actual computation doesn't consume any energy per se. So electrons traveling a more complex path probably cause somewhat more energy loss, as there is more wire/transistors to pass. But most of the total loss doesn't actually occur in the ALU. Empirically, from what you can see operating systems do, the most effective way of consuming less power is actually running at a slower clock, and the most effective way to achieve that is getting work done faster; that's not tied to the number of instructions.

The Stackoverflow question here [1] seems to suggest that SIMD vs no SIMD has negligible overhead compared to entering a lower power state sooner.

[1] https://stackoverflow.com/questions/19722950/do-sse-instruct...

tylerhou|3 years ago

But you’re not counting static power of the CPU and the whole system. It is possible for the vectorized code to consume less power overall because it can complete faster, using less static power.

janwas|3 years ago

We mean energy, right? The surprising fact is that the energy cost of executing an instruction (scheduling etc.) is much higher than the actual operation. Thus SIMD amortizes that per-instruction overhead over multiple elements, leading to 5-10x gains ( https://pdfs.semanticscholar.org/f464/74f6ae2dde68d6ccaf9f53...).

Even in this example, which apparently has 4x vector instructions vs 1x scalar, AVX-512 (and probably even AVX2) would reduce energy usage because they can do 8 (or 4 for AVX2) 64-bit elements per cycle.

gpderetta|3 years ago

The slower algo will use more instructions, as it will have to run for more iterations. The faster algo will use wider ALUs that are more power hungry, so it appears a wash. But because fewer instructions are in flight, less power needs to be spent tracking them or renaming registers, caches need to be powered for a smaller amount of time, and so on.

tpolzer|3 years ago

Unless we're talking extremes (AVX512...), optimizing for power in modern CPUs is almost always optimizing the race to idle.

I.e. the CPU doesn't use much less power if most ALUs are idle, so you might as well use all of them to compute redundant results -> and then turn ~everything off sooner.

andyjohnson0|3 years ago

I'm curious about how an optimiser determines that a block of code can be vectorised. It's trivial to see this in the initial version of compute(), but I'm not sure how an optimiser does it. Is it as simple as checking that A, B, and C are const?

And how far would an optimiser typically take its analysis? For example, if B was defined inside the loop as (non-const) A * 2? Or as A * f(), where f() always returns 2, maybe using a constant or maybe a calculation, etc.

Seems like a very hard problem.

benreesman|3 years ago

Oh man, I have been smoked by data dependencies like this so many times. I've gotten a lot better over the years at "seeing" data dependencies in godbolt or whatever, but it still slips through in my code and that of my colleagues way more often than I'd like.

Is anyone aware of good tooling for automatically catching this sort of thing even some of the time?

slaymaker1907|3 years ago

I'm not sure if there are existing rules for it, but you could write a CodeQL query looking for data dependencies in loops. Obviously dependencies are sometimes required, but it could at least tell you they were there.

ummonk|3 years ago

Don’t use state in a loop unless you have to? Avoiding such use of state also makes code a lot easier to read and debug - the performance benefits are really just a bonus.

btdmaster|3 years ago

I'm running into an interesting result -- on my machine, the fast version runs at the same speed at LEN=1000000 (the default in the sample), but starts running 1.5 times faster at LEN=500000 and ends up twice as fast at LEN=100000 and lower. This is with gcc -O3. Why could this be?

rasz|3 years ago

cache size pushing you out to high latency memory

shp0ngle|3 years ago

For fun, I tried this in go on M1 Mac (basically because benchmarking in go is so easy)

And... the two versions run at exactly the same speed, on the M1 Mac, with Go.

edit: of course, with go, you can manually parallelize the faster option with goroutines... but that does something else, doesn't it. (and it's 500x faster.)

hayley-patton|3 years ago

Does the Go compiler auto-vectorise? Vectorisation appears to have been the cause of the weird performance.

oezi|3 years ago

Funny that this question gained 180 upvotes in 10 days, when it could just as easily have received the reverse for showing little effort by the author to figure out the (rather obvious) data dependency on his own.

LAC-Tech|3 years ago

I'll put my hand up and say none of the post or answers were obvious to me. I found it all very interesting.

lawrenceyan|3 years ago

Given that in terms of “absolute work” done, the optimization does hold true, is there any situation where it would be beneficial to implement this?

(Super low energy processors, battery powered, etc.?)

pjscott|3 years ago

Certainly! It makes sense in processors that don't do SIMD or speculative execution. There are a lot of those, but mostly for embedded stuff.

desperate|3 years ago

How would the energy cost of the two methods compare?

Beltiras|3 years ago

Is the implication here that tail call optimizations don't work anymore? They might seem to do the proper thing on the language level but the CPU just can't think that way.

readams|3 years ago

Tail calls will work the same as loop. So if you have data dependencies between loop iterations or between tail-call-eliminated stack frames, then it will be slower than if you do not have those dependencies.

Retr0id|3 years ago

How does this relate to tail calls?

AtNightWeCode|3 years ago

On older architectures there is also a cost for casting int to float. Not like float to int, but anyway.

The general advice when I worked with CG was to use deltas in iterations.

paradite|3 years ago

Is this kind of parallelization possible on only compiled language? Or is it also possible for interpreted language like JavaScript?

garethrowlands|3 years ago

The same considerations apply, though not always. JavaScript can be JIT compiled, though the JIT might not make the same optimisations as a C compiler. It's running on the same CPU though.

dgb23|3 years ago

If we ignore the details, then this is a perfect example of how simpler code should be preferred _by default_ over complex code. Both for performance and reasoning. In this case and often elsewhere, these are related things.

I'm taking the definition of simple and complex (or complected) from this talk[0]. In short: Simple means individual pieces are standing on their own, not necessarily easy or minimal. Complex means individual pieces are braided together into a whole.

The first code is simpler than the second. Just have a look at the two solutions and you see what I mean: The individual instructions in the first stand on their own and clearly declare what they mean as well. The second example is more complex, because each line/subexpression cannot be understood in isolation, there is an implicit coupling that requires the programmer _and_ the machine code interpreter to understand the whole thing for it to make sense. The CPU apparently fails at that and cannot optimize the machine instructions into more efficient microcode.

The example illustrates performance benefits of simpler code, but there are others too:

For example a type checker might be able to infer things from simpler code, but not from complex code. Again, complexity is about coupling, which can for example arise from making assumptions about things that are out of bounds or logic that happens in some other part of your program, which _this_ part relies on.

Things like these can either be outright rejected by a type checker or simply ignored and sometimes we can provide additional ceremony to make it happy. But there is always an underlying question when these issues arise: Can I express this in a simpler way? It's sometimes possible to describe rich semantics with simpler, more primitive types and explicit coordination.

Another thing that comes to mind are rich ORMs. They can be very convenient for common cases. But they have footguns for the general case, because they _complect_ so many things, such as validation, storage, domain rules, caching etc. And they are leaky abstractions because we have to do certain things in certain ways to avoid bloated SQL, N+1 etc.

They are not simple (and certainly not declarative) so we program against a black box. There are simpler "ORMs" that I very much like, but they typically only provide a minimal set of orthogonal features, such as query builders, and mapping results into plain data structures. It is simpler to use these kind of orthogonal tools.

Last but not least: Simple code can be pulled apart and changed more easily without having to worry about introducing problems. The substitution rule or referential transparency enables this by guaranteeing that each expression can be replaced by the value it will evaluate to. This also implies that other expressions that evaluate to the same value can be substituted freely.

[0] Simple Made Easy - Rich Hickey at Strangeloop 2011 https://www.youtube.com/watch?v=LKtk3HCgTa8

djmips|3 years ago

You can't ignore the details if you want the fastest performance. Unfortunately sometimes the solution that's the fastest is also more complex. Simplicity tends to win and also is less likely to be buggy, easier to maintain etc but sometimes the fastest code deliberately introduces coupling, hardware specific cache shenanigans or unreadable hand written asm and/or inlined SIMD. This is often counter to your hypothesis.

ummonk|3 years ago

The addition based version should be hand vectorizable if you do the math to figure out the increment based on vector sizes.
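
Sketched out: give each of the VW lanes its own (Y, Z) pair and step the loop by the vector width (my reconstruction for data[i] = A*i*i + B*i + C; assumes len is a multiple of VW):

```c
#include <stddef.h>

#define VW 4 /* assumed vector width: 4 doubles, e.g. AVX */

/* Lane l covers indices l, l+VW, l+2*VW, ... Stepping i by VW changes
 * f(i) by A*(2*i*VW + VW*VW) + B*VW, and that first difference itself
 * grows by the constant 2*A*VW*VW. The inner lane loop has no
 * cross-lane dependencies, so a compiler can map it to vector ops. */
void compute_vec(double *data, size_t len, double A, double B, double C)
{
    double Y[VW], Z[VW];
    for (int l = 0; l < VW; l++) {
        Y[l] = (A * l + B) * l + C;                           /* f(l) */
        Z[l] = A * (2.0 * l * VW + (double)VW * VW) + B * VW; /* f(l+VW)-f(l) */
    }
    const double dZ = 2.0 * A * VW * VW;
    for (size_t i = 0; i < len; i += VW) {
        for (int l = 0; l < VW; l++) {
            data[i + l] = Y[l];
            Y[l] += Z[l];
            Z[l] += dZ;
        }
    }
}
```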

diarrhea|3 years ago

Is this relevant to interpreted languages as well? I’m thinking perhaps Pythons bytecode could have similar optimisations.

layer8|3 years ago

TLDR: Due to loop-carried dependencies preventing parallelized execution: https://en.wikipedia.org/wiki/Loop_dependence_analysis#Loop-...

In the 2 additions version, computation of the next iteration depends on the results of the preceding iteration. In the multiplication version, the computations are independent for each iteration, enabling parallel execution (by SIMD and/or pipelined/superscalar execution).

mlatu|3 years ago

you could try to prefill the first cell with -(A+B), then each cell gets A+B+C, and in a loop for (j=0; j<i; j++): (A+A)

but I'm too lazy to test that