asrp | 5 years ago
I'm sure there are many competing constraints so definitely don't do it because I'm suggesting this on a whim. :) My reasoning is that as a human reader, the comment is the more readable part, so I'd want to see it first. And for a computer, it probably doesn't care if the op code appears first or not.
> You probably don't want to understand Haskell's loop fusion by comparing source and generated code.
Indeed. But even though C and Haskell are very different, I think they share a common philosophy about compilation where you can basically do whatever you want as long as it still produces the same result.
I vaguely remember looking at Python's generated bytecode (with `dis.dis`) and seeing it wasn't too bad. I haven't tried it on a larger program, though.
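For what it's worth, here's the kind of thing I mean (a toy function I made up, not from any real project). `dis.dis` prints one bytecode instruction per line, grouped by source line, so the mapping back to the source is fairly direct:

```python
import dis

def total(xs):
    s = 0
    for x in xs:
        s += x
    return s

# Print the bytecode; instructions are grouped by the source line
# they came from, so you can read the two side by side.
dis.dis(total)
```
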
There's tcc (and, more recently, chibicc, which I haven't had a chance to check out yet) that you're probably already aware of. Is the generated output still pretty bad?
I'll also throw my own attempt into the ring:
- High level: https://github.com/asrp/flpc/blob/master/lib/stage0.flpc
- Low level (up to line 45): https://github.com/asrp/flpc/blob/master/precompiled/self.f
even though it's not quite optimized for this purpose and the code itself is still a bit unclean. If there were a syntax highlighter for the low-level language, I'd probably highlight "[", "]" and "bind:" as a start. I can try to clarify any obscure syntax or primitives.
Some more general ideas to get around the issue:
- Invoke optimization only when asked for specifically (and apply the optimization locally). That is, optimization would need at least some additional syntax in the language.
- Explicitly track the correspondence between source and target (at the character or token level), and do this in each optimization pass as well. Maybe even keep the intermediate values of each pass so you can browse through them like a stack trace.
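A minimal sketch of the second idea, with a made-up instruction format (the names and the constant-folding pass are mine, not from flpc): each generated instruction carries the source span it came from, and the optimization pass merges spans instead of dropping them, so the source-to-target mapping survives the pass.

```python
from dataclasses import dataclass

@dataclass
class Instr:
    op: str
    args: tuple
    src: tuple  # (start, end) character offsets into the source text

def fold_constants(code):
    """Replace PUSH a; PUSH b; ADD with PUSH (a+b), merging source spans."""
    out = []
    for ins in code:
        if (ins.op == "ADD" and len(out) >= 2
                and out[-1].op == "PUSH" and out[-2].op == "PUSH"):
            b = out.pop()
            a = out.pop()
            # The folded instruction's span covers all three originals.
            merged = (min(a.src[0], b.src[0], ins.src[0]),
                      max(a.src[1], b.src[1], ins.src[1]))
            out.append(Instr("PUSH", (a.args[0] + b.args[0],), merged))
        else:
            out.append(ins)
    return out

# "1+2" compiled naively: PUSH 1; PUSH 2; ADD
code = [Instr("PUSH", (1,), (0, 1)),
        Instr("PUSH", (2,), (2, 3)),
        Instr("ADD", (), (1, 2))]
folded = fold_constants(code)
# folded is a single PUSH 3 whose src span (0, 3) still points at "1+2"
```

Keeping the per-pass intermediate `code` lists around would then give you the stack-trace-like view: the same span looked up in each snapshot shows what that piece of source became at every stage.
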
> In my mind there's an idea maze where there are 3 major possibilities for improving the future of software:
I guess I'm trying another route even though I don't know if it fits the definition of improving the future of software.
d) Have programmers make their own compiler/interpreter and language by giving them the tools and knowledge to do that (more) easily.
This would (hopefully) avoid the black-box/magic issue, since the programmer would know the details of the inner workings by virtue of having written it. Though I'm definitely still very far from that goal, and the same questions can be asked about how to improve their target language.
akkartik | 5 years ago
> My reasoning is that as a human reader, the comment is the more readable part, so I'd want to see it first. And for a computer, it probably doesn't care if the op code appears first or not.
Yeah, for sure. One rebuttal that comes to mind is the dictum, "don't get suckered by comments, debug code." Comments are useful, but too much emphasis on them has led to dark times in my past :)
Still very worth considering.
asrp | 5 years ago
> An optimizing linter has the problem of being destructive. It goes like this:
> The programmer will write his or her program in a readable way. They'll run it through the compiler, which points out that something can be optimized, the programmer—having already gone through the process of writing the first implementation with all its constraints and other ins and outs fresh in their mind—will slap their head and mutter "of course!", and then replace the original naive implementation with one based on the notes the compiler has given. Chances are high that the result will be less comprehensible to other programmers who come along—or even to the same programmer revisiting their own code 6 months later.
Also, a data point and word of warning about (lack of) optimization: my own projects (one of which was mostly hand-written in x86 assembly) have been pretty heavily stalled by speed issues, which sent me on significant detours. Since you are working with your own compiler/interpreter to implement your levels, you are directly affected by its compilation speed as you iterate. Even on modern hardware, it can quickly become too slow to be usable at all.
This is, unfortunately, another consequence of having too much black magic in (C) compilers: we end up with the wrong intuition about how fast computers really are.