top | item 18573308

Why is 2 * (i * i) faster than 2 * i * i in Java?

424 points | trequartista | 7 years ago | stackoverflow.com

100 comments

userbinator|7 years ago

So it's an issue of the optimizer; as is often the case, it unrolls too aggressively and shoots itself in the foot, all the while missing out on various other opportunities.

In my experience, loop unrolling should basically never be done except in extremely degenerate cases; I remember not long ago someone I know who also optimises Asm remarking "it should've died along with the RISC fad". The original goal was to reduce per-iteration overhead associated with checking for end-of-loop, but any superscalar/OoO/speculative processor can "execute past" those instructions anyway; all that unrolling will do is bloat the code and work against caching. Memory bandwidth is often the bottleneck, not the core.

pcwalton|7 years ago

> In my experience, loop unrolling should basically never be done except in extremely degenerate cases

Not true. Like many such optimizations, loop unrolling can be useful because it makes downstream loads constant.

For example:

    float identity[4][4];
    for (unsigned y = 0; y < 4; y++)
        for (unsigned x = 0; x < 4; x++)
            identity[y][x] = y == x ? 1 : 0;
    ... do some matrix math ...
In this case, the compiler probably wants to unroll the loops so that it can forward the constant matrix entries directly into the matrix arithmetic. It'll likely be able to eliminate a lot of operations that way.

(You might ask "who would write this code?" As Schemers say: "macros do.")

See LLVM's heuristics: http://llvm.org/doxygen/LoopUnrollPass_8cpp.html#ad7c38776d7...

tomsmeding|7 years ago

In addition to the optimisations already mentioned, loop unrolling also typically enables vectorisation in compilers. You might argue that vectorisation doesn't strictly require the relevant operations to sit next to each other in a contiguous instruction stream, but it makes the vectorisation pass a lot nicer and simpler (if it can be called that to begin with).

nonsince|7 years ago

Loop unrolling is useful only when it enables other optimisations. The most common is constant folding.
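A minimal Java sketch of that interaction (hypothetical class and method names): once the trip count is a known constant, unrolling exposes a sum of literal products that constant folding can then collapse.

```java
public class UnrollFold {
    // With the bound fixed at 4, an unrolling pass can rewrite this loop as
    // 0*0 + 1*1 + 2*2 + 3*3, which constant folding reduces to the literal 14.
    static int sumOfSquares() {
        int total = 0;
        for (int i = 0; i < 4; i++) {
            total += i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares()); // prints 14
    }
}
```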

dragontamer|7 years ago

Fully agree.

In most cases on modern systems, small loops should stay as compact as possible, so that they fit in the uop cache. The "for"-loop overhead (the inc, cmp, and jmp instructions) effectively executes in parallel with the loop body: modern cores are highly out-of-order, so the overhead is virtually nil.

nwmcsween|7 years ago

Well, I've seen the opposite: take a naive string function of some sort (strlen, etc.), then manually unroll it and compare.
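A sketch of that experiment in Java (hypothetical names; a strlen-style scan over a zero-terminated byte array, naive vs. manually unrolled by four). Whether the unrolled version actually wins depends on the JIT and the hardware.

```java
public class UnrolledScan {
    // Naive scan: one byte per iteration.
    static int naiveLen(byte[] s) {
        int i = 0;
        while (s[i] != 0) i++;
        return i;
    }

    // Manually unrolled: four bytes per iteration, remainder handled after.
    static int unrolledLen(byte[] s) {
        int i = 0;
        while (i + 4 <= s.length) {
            if (s[i] == 0) return i;
            if (s[i + 1] == 0) return i + 1;
            if (s[i + 2] == 0) return i + 2;
            if (s[i + 3] == 0) return i + 3;
            i += 4;
        }
        while (s[i] != 0) i++;
        return i;
    }

    public static void main(String[] args) {
        byte[] hello = "hello".getBytes();
        byte[] s = new byte[hello.length + 1]; // zero-terminated copy
        System.arraycopy(hello, 0, s, 0, hello.length);
        System.out.println(naiveLen(s) + " " + unrolledLen(s)); // prints 5 5
    }
}
```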

astn|7 years ago

Yeah, sometimes the compiler unrolls too much, and an innocent-looking one-liner can be compiled into a monstrosity like this:

https://godbolt.org/z/aKtko5

jepler|7 years ago

You should translate your program to C++ and build with clang; it turns the loop into a single constant load. https://godbolt.org/z/slznbU

cryptonector|7 years ago

Did you read TFA? The author did that (though using GCC), and the reason the optimizer does what you see is undefined behavior due to signed integer overflow.

utopcell|7 years ago

GCC applies the same optimization and compiles to a single constant if overflow does not occur. To see this, you can change '1000000000' to '1000'.

ychen306|7 years ago

It's usually a good idea to turn the loop bound into a variable when benchmarking a compiler, lest it optimize the whole thing away, as happened in this case.
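A sketch of that benchmarking pitfall (hypothetical names): with a literal bound the JIT may fold the entire loop to a constant; taking the bound from runtime input (or a JMH `@Param` in a real harness) blocks that.

```java
public class BenchBound {
    // The expression from the article; with n known at compile time,
    // the whole loop is a candidate for folding into one constant.
    static int sumSquares(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += 2 * (i * i);
        }
        return total;
    }

    public static void main(String[] args) {
        // Bound comes from runtime input, not a compile-time literal,
        // so the compiler cannot precompute the result.
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 1000;
        System.out.println(sumSquares(n));
    }
}
```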

beeforpork|7 years ago

With all the optimisations implemented in compilers today, it is striking to see this opportunity missed. Put differently: compiler writers labour over optimisations that gain 0.1% performance in some special cases, while others that could gain 20% go unimplemented.

Why? Is this optimisation particularly difficult to implement? Or is it just missed low-hanging fruit? It sure looks easy (like: rearrange expressions to keep the expression tree shallow and left-branching to avoid stack operations).

acdha|7 years ago

Compiler developers have tons of benchmarks which they run. I’d bet that this is as simple as not being significant in their test suite, with a good chance that it’s both not as simple as it might seem or that there are impacts on more complicated code which is in their benchmark suite or a big customer’s app.

DannyBee|7 years ago

The truth is that the HotSpot compiler is pretty old at this point and never really implemented a lot of good, robust, and thorough optimizations (I've read the source every year or two). It does some stuff and hopes for the best.

This is why there is a real commercial jvm market with azul.

yifanl|7 years ago

It's possible that they're working in the frame of mind that there aren't any low-hanging fruit left after so many years of compiler optimizations and forget to even try.

unknown|7 years ago

[deleted]

techopoly|7 years ago

That just might be the most dedicated answer I've ever seen on Stack Overflow.

dreamcompiler|7 years ago

I thought at first this was because integer squaring is potentially faster than general integer multiplication and the compiler wasn't seeing the square operation in the second case, but that's not the explanation here.

garmaine|7 years ago

There isn’t an integer square opcode on any major processor architecture though, right?

ww520|7 years ago

I'm surprised it's not doing a left shift for the x2.

jcdavis|7 years ago

It is in the first example (the sal instruction)

podsnap|7 years ago

The graal behavior is a lot more sane:

    graal:
    [info] SoFlow.square_i_two   10000  avgt   10  5338.492 ± 36.624  ns/op   // 2 *\sum i * i
    [info] SoFlow.two_i_         10000  avgt   10  6421.343 ± 34.836  ns/op   // \sum 2 * i * i
    [info] SoFlow.two_square_i   10000  avgt   10  6367.139 ± 34.575  ns/op   // \sum 2 * (i * i)
    regular 1.8:
    [info] SoFlow.square_i_two   10000  avgt   10  6393.422 ± 27.679  ns/op
    [info] SoFlow.two_i_         10000  avgt   10  8870.908 ± 35.715  ns/op
    [info] SoFlow.two_square_i   10000  avgt   10  6221.205 ± 42.408  ns/op
The graal-generated assembly for the first two cases is nearly identical, featuring unrolled repetitions of sequences like

    [info]   0x000000011433ec03: mov    %r8d,%ecx
    [info]   0x000000011433ec06: shl    %ecx               ;*imul {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_two_i_@15 (line 41)
    [info]   0x000000011433ec08: imul   %r8d,%ecx          ;*imul {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_two_i_@17 (line 41)
    [info]   0x000000011433ec0c: add    %ecx,%r9d          ;*iadd {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_two_i_@18 (line 41)
    [info]   0x000000011433ec0f: lea    0x5(%r11),%r8d     ;*iinc {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_two_i_@20 (line 40)

while the third case does a single shl at the end.

    [info]   0x000000010e2918bb: imul   %r8d,%r8d          ;*imul {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_square_i_two@15 (line 32)
    [info]   0x000000010e2918bf: add    %r8d,%ecx          ;*iadd {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_square_i_two@16 (line 32)
    [info]   0x000000010e2918c2: lea    0x3(%r11),%r8d     ;*iinc {reexecute=0 rethrow=0 return_oop=0}
    [info]                                                 ; - add.SoFlow::test_square_i_two@18 (line 31)                                   
Both graal and C2 inline, but as usual the graal output is a lot more comprehensible.

bnegreve|7 years ago

I don't see how generating different code for the same mathematical expression can be a good thing.

The compiler should detect that the two expressions are strictly equivalent and generate whatever code it believes is the fastest.

Any idea why it is this way?

gnuvince|7 years ago

Because of integer overflows and floating-point operations, the notion of equivalent mathematical expressions is tricky.

    fn main() {
        let a: i8 = 125;
        let b: i8 = 3;
        let c: i8 = (a + b) / 2;
        let d: i8 = b + ((a - b) / 2);
        println!("{} {}", c, d);
    }
This program outputs `-64 64` (in a release build; a debug build panics on the overflow), although the computations of `c` and `d` are mathematically equivalent.

Here's another example using floating point numbers:

    fn main() {
        let mut total1: f32 = 0.0;
        let mut total2: f32 = 0.0;
        let mut counter1: f32 = 0.0;
        let mut counter2: f32 = 100.0;

        for _ in 0 .. 10001 {
            total1 += counter1;
            total2 += counter2;
            counter1 += 0.01;
            counter2 -= 0.01;
        }
        println!("{} {}", total1, total2);
    }
The output of this program is `500041.16 500012.16`, a difference of 29 between two loops that mathematically compute the same total (unless I made a mistake).

fragmede|7 years ago

Ideally, yes, the compiler should generate the fastest possible code for a given mathematical expression.

However, the compiler's optimization passes are not magic and sometimes produce suboptimal code. Back when C was young (the 1970s and 1980s) this was frequently the case, so dropping into assembly to hand-code performance-critical sections is just what people did to get software to run smoothly.

Thankfully this is largely no longer the case, though it does still happen.

In Java's case, the JVM runs on top of multiple different architectures which makes optimization even more complicated.

Low-level instruction generation and optimization is just one topic under the umbrella of compiler design, which is a huge (and fascinating!) discipline to get into.

amelius|7 years ago

Because it's more work for the compiler to reduce the expression to something canonical (and it might even be impossible).

Also what good will it bring? What if the canonical expression triggers the slow path? Now you have no means to change it into the fast version.

Further, in the case of floating-point operations, operation order matters for rounding. And with integer operations, the actual form used can matter for preventing overflow of intermediate results.
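A minimal Java illustration of the rounding point (hypothetical class name): float addition is not associative, so reordering the same three operands changes the rounded result.

```java
public class FpOrder {
    public static void main(String[] args) {
        float a = 1e8f, b = -1e8f, c = 1.0f;
        // a + b is exactly 0, so adding c afterwards keeps the 1.
        System.out.println((a + b) + c); // prints 1.0
        // b + c rounds back to -1e8f (the 1 is below half an ulp at
        // that magnitude), so the final sum loses it entirely.
        System.out.println(a + (b + c)); // prints 0.0
    }
}
```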

alkonaut|7 years ago

Is overflow UB, so the compiler can choose to ignore the fact that 2 * (i * i) could overflow differently from 2 * i * i?

I'm not sure it does overflow differently, but I would expect overflow to behave consistently as written, not dependent on optimization. Is that not the case?

BeeOnRope|7 years ago

Nothing you can do in pure Java code is UB in the C/C++ sense.
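Right; Java int arithmetic is defined to wrap modulo 2^32 (two's complement), and modular multiplication is associative, so the two expressions agree even when `i * i` overflows. A quick check (hypothetical class name):

```java
public class WrapEqual {
    public static void main(String[] args) {
        int i = 123_456_789;   // i * i overflows int
        int a = 2 * (i * i);   // parenthesised form
        int b = 2 * i * i;     // left-associative: (2 * i) * i
        // Both wrap mod 2^32, and multiplication mod 2^32 is
        // associative, so the wrapped results are bit-identical.
        System.out.println(a == b); // prints true
    }
}
```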

Koshkin|7 years ago

At first, I thought it was because i * i == -1.

microcolonel|7 years ago

I guess they do not use value numbering, which is typically how you get equivalent results for cases like this.

qwerty456127|7 years ago

IMHO some kind of logic preprocessor should take care of this before the actual compilation.

isbvhodnvemrwvn|7 years ago

How? Java is compiled to bytecode; you don't know the architecture of the system the code is going to run on. That's one of the reasons javac implements only the simplest optimizations (constant folding and the like).
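For example (hypothetical class name), javac does evaluate compile-time constant expressions itself; the class file stores the folded literal, not the multiplication:

```java
public class Folded {
    // javac folds this at compile time: the constant pool holds 86400,
    // and no imul instructions appear in the bytecode.
    static final int SECONDS_PER_DAY = 60 * 60 * 24;

    public static void main(String[] args) {
        System.out.println(SECONDS_PER_DAY); // prints 86400
    }
}
```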

polskibus|7 years ago

I wonder if the same applies to .net (fx/core).

pjmlp|7 years ago

Depends on the runtime.

You have the old JIT, replaced by RyuJIT on .NET 4.6 and .NET Core.

Then .NET Native, which does AOT compilation via the same backend as Visual C++.

Followed by Mono's JIT/AOT implementation.

Windows/Windows Phone 8.x used a Bartok derived compiler for the MDIL format.

The same applies to Java, though, as the answer only goes through what HotSpot does; there are many other JIT/AOT compilers for Java as well.

idiot2|7 years ago

[deleted]

networkimprov|7 years ago

Has anyone tried this with Go?

saagarjha|7 years ago

Tried what specifically? This particular example, or something similar where the compiler generates code with different speeds for seemingly equivalent code?

pmarreck|7 years ago

No, because come back when you’re a real language with a runtime error handler

JohnL4|7 years ago

The database is fast enough for a few extra trips to it, so this is definitely what we should be focusing on.

(My cup of bitterness doth overflow.)