nicula | 1 year ago
This whole investigation started because I was writing some Rust code with a couple of small `match`es, and for some reason they weren't being optimized into a lookup table. I wrote a more minimal reproduction of the issue in C++ and eventually found the Clang regression. Since Rust also uses LLVM, `match`es suffer from the same regression (depending on which Rust version you're using).
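For context, here is a minimal sketch (my own illustration, not the actual code from the investigation) of the kind of small, dense `match` that an optimizer can normally lower to a lookup table instead of a chain of compares and branches:

```rust
// Hypothetical stand-in for the reproduction: a dense match over a
// small contiguous range of keys, which is a textbook candidate for
// lowering to an indexed load from a constant table.
pub fn classify(x: u8) -> u8 {
    match x {
        0 => 10,
        1 => 20,
        2 => 30,
        3 => 40,
        _ => 0,
    }
}

fn main() {
    // With the regression, codegen falls back to branchy code
    // instead of indexing into a constant table.
    println!("{}", classify(2));
}
```

Inspecting the generated assembly (e.g. on the Compiler Explorer) is how you can tell whether the table lowering happened.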
qalmakka | 1 year ago
nicula | 1 year ago
Yes.
> If you run LLVM 18's `opt` on bytecode generated by Clang 19 and then compile it, does it also generate the same bad assembly?
No. If you pass the LLVM bitcode generated by Clang 18 to Clang 19, then the resulting assembly is good.
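A rough sketch of that cross-version test (the file name, flags, and version-suffixed binary names are my assumptions, not taken from the thread). It emits bitcode with Clang 18 and then hands that bitcode to Clang 19 for code generation:

```shell
# Hypothetical sketch; skip gracefully if the versioned compilers
# are not installed on this machine.
for tool in clang-18 clang-19; do
    command -v "$tool" >/dev/null 2>&1 || { echo "skipped: $tool not found"; exit 0; }
done

# Stand-in for the minimal C++ reproduction mentioned in the thread.
cat > repro.cpp <<'EOF'
int classify(int x) {
    switch (x) {
        case 0: return 10;
        case 1: return 20;
        case 2: return 30;
        case 3: return 40;
        default: return 0;
    }
}
EOF

clang-18 -O2 -emit-llvm -c repro.cpp -o repro.bc  # IR from the older frontend
clang-19 -O2 -c repro.bc -o repro.o               # codegen by the newer compiler
echo "done"
```

Comparing the assembly of `repro.o` against a pure Clang 19 build of `repro.cpp` is what separates a frontend (IR-emission) change from a backend (codegen) one.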
I called it a 'Clang regression' in the sense that I discovered and tested this performance difference via Clang. So from a typical user's perspective (someone who doesn't care about Clang's inner workings and distinct components), this is a 'Clang regression'.