prof_hobart | 9 years ago
Back in the day when both memory and clock cycles were very precious, it wasn't unknown to use self-modifying code as a performance optimisation trick. I did it at least once in the late 80s, when I was working on comms software that had to be as fast as possible in order to avoid missing incoming data.
There was a check that needed to be done on every byte - I think it was whether I was now processing graphics characters or not - but the check was taking valuable time, and the value didn't change very often.
So the most efficient way I found to do it was to wait until I got a "switch to/from graphics" byte in the input stream, and then patch the instruction at a known location to be either an "unconditional jump to the graphics routine" or a "no operation (NOP)", which fell straight through to the routine for normal characters.
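(A hedged sketch of the idea, since the original assembly isn't shown: this Python snippet *simulates* the patch-a-NOP trick with a tiny interpreter whose "code" is a mutable list. The sentinel value `SWITCH_BYTE` and all names are made up for illustration. In real machine code the patched instruction is simply executed, so the per-byte mode test disappears entirely; here the test on `code[0]` only stands in for that execution.)

```python
# Hypothetical simulation of self-modifying dispatch (not the original code).
NORMAL, GRAPHICS = "normal", "graphics"
SWITCH_BYTE = 0x0E  # assumed "switch to/from graphics" byte in the stream


def make_processor():
    # The single patchable "instruction": "nop" falls through to the
    # normal-character handler; "jmp_graphics" diverts to the graphics one.
    code = ["nop"]
    out = []

    def process(byte):
        if byte == SWITCH_BYTE:
            # Self-modification: rewrite the instruction once, instead of
            # testing a mode flag on every single byte.
            code[0] = "jmp_graphics" if code[0] == "nop" else "nop"
            return
        if code[0] == "jmp_graphics":  # the patched unconditional jump
            out.append((GRAPHICS, byte))
        else:                          # the NOP: fall straight through
            out.append((NORMAL, byte))

    return process, out
```

Usage: feed bytes through `process`; only the switch byte pays the cost of updating the dispatch, and every other byte takes whichever path is currently patched in.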
It was a horrible hack, but it worked.
Thankfully, I've not felt the need to even consider this approach for the past 20 years.
userbinator | 9 years ago
https://news.ycombinator.com/item?id=12485205
I don't think SMC has ever been "relatively mainstream", at least after HLLs gained popularity over Asm. But in Asm, it still has its uses where a full JIT would be far too much overhead.
pklausler | 9 years ago
jakub_h | 9 years ago
That's because of all the branch predictors, probably. ;)