Well, it optimizes in a similar sense to how Rust guides the developer toward writing more optimizable code. Comparing the two is apples to oranges, though; there are many ways to get there. With Z3 you can do symbolic testing, and it’s very much about checking whether the expected output of the constructed IR matches the actual output. When it comes to the IR, I like to think of Z3 as almost an abstract assembler: usually a separate language altogether is assembled via the IR that’s produced from the rules given to the SMT solver. LLVM optimization happens after everything has been “assembled”, but it’s the rules given to the SMT solver that guide the developer toward writing code LLVM can optimize better. Otherwise you could just write anything and hope that LLVM optimizes it.

Personally, I wouldn’t want an extra tool that tries to silently fix things beyond what LLVM already does with the IR, because it would feel compulsory for everything. There should be an end goal for what it’s optimizing and why, in the form of a proof or lemma, for it to make sense in the context of Z3. So yes, definitely possible, but probably more useful if you’re building a new compiler for a new platform than trying to do better than an existing one.
mhh__ | 5 years ago
You can already get some serious performance increases by giving the compiler information: if I assert that two arrays have equal length in D, I can then add an inline statement equivalent to GCC's __builtin_unreachable, and the compiler does a surprisingly good job of eliding length checks it normally couldn't assume away.