It depends on the context of the code: the AMT is built after the optimization pass but before emission of LLVM IR. In most cases it shouldn't be super expensive; the conversion itself happens at compile time, and only the usage happens at run time. It can also be turned off entirely, but we need testing to see the actual performance implications.

Right now the AMT exists only as an internal MD document. We've figured out the whole theory bit; the implementation comes after we've written the error handler (for compiler-side error messages), the lexer, pre-processor, parser, symbol table, optimizer passes, and the Borrow Checking IR (we haven't decided on a proper name for it yet, for now it's just BCIR). Then comes the AMT, followed by the emitters, and the rest of the features follow suit.

We will test each stage individually once it's complete and build a proper dataset of numbers for performance, memory usage, and other metrics, with proper real-world cases. It very well may be slower, or not, we don't know yet; in theory there are no immense performance implications at runtime other than a couple ...ns extra.
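To make the planned build order concrete, here's a minimal sketch of the stage sequence described above. The stage names come straight from the comment; the enum, the helper, and everything else are my own assumptions, not the project's actual code:

```rust
// Hypothetical sketch of the planned pipeline build order.
// Stage names are from the description; structure is assumed.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Stage {
    ErrorHandler,    // compiler-side error messages, built first
    Lexer,
    Preprocessor,
    Parser,
    SymbolTable,
    OptimizerPasses,
    Bcir,            // Borrow Checking IR (working name)
    Amt,             // built at compile time, consulted at run time
    Emitters,        // LLVM IR emission comes after the AMT
}

/// Returns the stages in the order they are planned to be implemented.
fn build_order() -> Vec<Stage> {
    use Stage::*;
    vec![
        ErrorHandler, Lexer, Preprocessor, Parser,
        SymbolTable, OptimizerPasses, Bcir, Amt, Emitters,
    ]
}

fn main() {
    for (i, stage) in build_order().iter().enumerate() {
        println!("{}. {:?}", i + 1, stage);
    }
}
```

The key ordering constraint from the comment is visible here: the AMT sits after the optimizer passes and BCIR but before the emitters, which is why its conversion cost lands at compile time rather than run time.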
ze7111|8 months ago