asrp | 5 years ago
> An optimizing linter has the problem of being destructive. It goes like this:
> The programmer will write his or her program in a readable way. They'll run it through the compiler, which points out that something can be optimized, the programmer—having already gone through the process of writing the first implementation with all its constraints and other ins and outs fresh in their mind—will slap their head and mutter "of course!", and then replace the original naive implementation with one based on the notes the compiler has given. Chances are high that the result will be less comprehensible to other programmers who come along—or even to the same programmer revisiting their own code 6 months later.
Also, a data point and word of warning about (lack of) optimization: my own projects (one of which was mostly hand-written in x86 assembly) have been pretty heavily stalled by speed issues, which sent me on significant detours. Since you are working with your own compiler/interpreter to implement your levels, you are directly affected by its compilation speed as you iterate. Even on modern hardware, it can quickly become too slow to be usable at all.
This is unfortunately another consequence of having too much black magic in (C) compilers. So we get the wrong intuition about how fast computers are.
akkartik | 5 years ago
asrp | 4 years ago
I wish there were some framework for me to add optimizations as I go along, especially if it could come with some speed guarantee. Though in my case, I'd also like not to lose the relation to the original source (the way C does when values are optimized out).