item 42816928

negate32 | 1 year ago

One of the big things that makes LLVM very slow is the abundance of passes; I believe I last counted 75 for an unoptimized function. My solution is to write more combined passes: thanks to the SoN design, I'm combining a lot of things which are traditionally separate passes. For instance, my equivalent to "SimplifyCFG", "GVNPass", "InstCombine", "EarlyCSEPass", and "JumpThreadingPass" is one combined peephole solver which runs faster than all of these passes separately. This is for two main reasons:

* Less cache churn: I'm doing more work per cache line loaded in (rather than rescanning the same function over and over again).

* Combining mutually beneficial optimizations can lead to fewer phase-ordering problems and a better solve (this is why SCCP is better than running DCE and constant propagation separately).

In a few years, when TB is mature, I'd wager I'll have maybe 10-20 real passes in the "-O2 competitive" optimizer pipeline, because in practice there's no need to have so many passes.

gergo_barany | 1 year ago

If this is one of the main things you want to demonstrate, wouldn't it be better to focus on this one goal first, instead of the whole pipeline from a C preprocessor to directly linked executables?

Essentially, if you say that LLVM's mid-end in particular is slow, I would expect you to present a drop-in replacement for LLVM's mid-end opt tool. You could leave C-to-LLVM-bitcode to Clang. You could leave LLVM-bitcode-to-machine-code to llc. Just like opt, take unoptimized LLVM bitcode as input and produce optimized LLVM bitcode as output. You would get a much fairer apples-to-apples comparison of both code quality and mid-end compiler speed (your website already mentions that you aren't measuring apples-to-apples times), and you would duplicate much less work.
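For reference, the drop-in setup being suggested would look roughly like this. These are standard Clang/LLVM tool invocations; "tbopt" is a made-up name for the hypothetical Tilde mid-end, and foo.c stands in for any benchmark source:

```shell
# Emit unoptimized bitcode (disable `optnone` so a mid-end can touch it):
clang -O0 -Xclang -disable-O0-optnone -emit-llvm -c foo.c -o foo.bc

# Baseline: LLVM's own mid-end.
opt -passes='default<O2>' foo.bc -o foo.opt.bc

# Contender: a hypothetical drop-in reading and writing the same bitcode.
# tbopt foo.bc -o foo.opt.bc

# Shared back-end for both, so only mid-end speed/quality is compared.
llc foo.opt.bc -o foo.s
```

Because the front-end and back-end are identical in both runs, timing the middle step in isolation gives the apples-to-apples numbers being asked for.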

Alternatively, look into existing Sea of Nodes compilers and see if you can build your demonstrator into them. LibFIRM is such a C compiler: https://libfirm.github.io/ There may be others.

It just seems like you are mixing two things: On the one hand, you are making some very concrete technical statements that integrated optimizations are good and the Sea of Nodes is a great way to get there. A credible demonstrator for this would be very welcome and of great interest to the wider compiler community. On the other hand, you are doing a rite-of-passage project of writing a self-hosting C compiler. I don't mean this unkindly, but that part is less interesting for anyone besides yourself.

EDIT: I also wanted to mention that the approach I suggest is exactly how LLVM became well-known and popular. It was not because of Clang; Clang did not even exist for the first eight years or so of LLVM's existence. Instead, LLVM focused on what it wanted to demonstrate: a different approach to mid-end optimizations compared to the state of the art at the time. Parsing C code was not part of that, so LLVM left that to an external component (which happened to be GCC).

DannyBee | 1 year ago

I'm very confused about what magic you believe will achieve what has not so far been achieved.

I'm also confused as to why you believe LLVM didn't start out the exact same way.

I say this as one of the main people responsible for working on combined pass replacements in both GCC and LLVM for things that were reasonable to be combined.

I actually love destroying lots of passes in favor of better combined ones. In that sense, I'm the biggest fan in the world of these kinds of efforts.

But these passes don't exist because of cruft, or 20 years of laziness; they exist because it's very hard to replace them with combined algorithms that are both faster and achieve the same results.

What exactly do you plan on replacing GVN + Simplify CFG + Jump Threading + correlated value prop with?

It took years of cherry-picking research and significant algorithm development to arrive at algorithms for this that had reasonable time bounds, were faster, and could do better than all of those passes combined. The algorithm is quite complex, and it's actually hard to prove it terminates in all cases. Because of the complexity, the number of people who understand it is pretty small. And that's before you get to the applied engineering of putting it in a production compiler.

These days, as the person originally responsible for it, I'd say it's not enough better to justify the complexity, even though it is definitely faster and more complete and would let you replace these passes.

Meanwhile, you seem to think you will mature everything and get there in a few years.

I could believe you will achieve some percentage of GCC's or LLVM's performance, but that's not the reason these passes exist. They exist because that is what it reasonably takes to achieve LLVM's (and GCC's) performance across a wide variety of code, at some acceptable level of algorithm complexity and maintainability.

So if you told me you were only shooting for 80% across some particular subset of code, I could believe 10-20 passes. If you told me you were going to build a different product that targets a different audience, or targets it in a different way, I could maybe believe it.

But for what you say here, I think you vastly underestimate the difficulty and vastly underappreciate the effort that goes into these things. This is hundreds of very smart people working on things for decades. It's one thing to have a healthy disrespect for the impossible. It's another to think that, in a few years, you will outdo hundreds of smart, capable engineers on pure technical achievement.

That strikes me as somewhere between hubris and insanity.

People also pretty much stopped building and publishing general-purpose compiler optimization algorithms a decade ago, moving towards much more specialized algorithms, ML-focused approaches, and the like.

This is in large part because there isn't a lot left worth doing.

So unless you've got magic bullets nobody else has, either you won't achieve the same performance level, or it will be slow, or it will take you a significant amount of algorithm development and engineering well beyond "a few years".

I say this not to dissuade you, but to temper your apparent expectations and view.

Honestly - I wish you the best of luck, and hope you succeed at it.

rendaw | 1 year ago

Critical responses from people in the industry are what I come here to read. I have no doubt about your credentials, and I'm not at all qualified to judge the technical details here, but your reply comes off as an emotional knee-jerk.

I read it a few times, and as best I can tell, this is what you're saying:

- You came up with a similar combined replacement pass for LLVM based on years of personal and external research.

- It's faster and has more functionality.

- It's very complex, and you're not comfortable that it's possible to achieve various reliability guarantees, so it is unreleased.

- Therefore (?) you think the Tilde author also couldn't possibly succeed.

AFAICT you also believe that the Tilde author hasn't completed their replacement pass. From their post, my take was that it was already done. The part that will mature is additional passes, or maybe optimizations and bugfixes, but not the MVP development.

Your main arguments seem to be probability and appeal to authority (external research, assigned responsibility, industry association). Pretty much all projects and startups fail, but it's because people attempt them that some succeed.

Is the author betting their career on this? Why do their expectations need to be tempered?

I'd be interested in hearing concrete criticisms of their algorithms (have you looked at the code?) or oversights in the design. Maybe the author would too! If you let the author know, maybe you could think of a solution to reduce the complexity or improve the guarantees together.