SNK were the gods of the 68000. I still remember back in the day getting a bug report on my 68000 emulator:
When playing King of Fighters, the time counter would go down to 0 and then wrap around to 99, effectively preventing the round from ending.
Eventually I tracked it down to the behavior of SBCD (Subtract Binary Coded Decimal): internally, the chip actually does update the overflow flag reliably (it's marked as undefined in the docs). SNK was checking the V flag and ending the round when it got set.
https://github.com/kstenerud/Musashi/blob/master/m68k_in.c#L...
SBCD was an old throwback instruction that was hardly used anymore, and the register variant took 6 cycles to complete (vs 4 for binary subtraction).
HOWEVER... For displaying the timer counter on-screen, they saved a ton of cycles with this scheme because extracting the digits from a BCD value is a simple shift by 4 bits (6 cycles) rather than a VERY expensive divide (140 cycles).
Interestingly, amigaos-gcc 6.5 uses dbra without having to jump through any of those contortions, as long as the optimisation level is set to at least -O1.
One thing I've heard from folks who develop for retro Atari platforms is that 68k support in GCC has been getting worse over time, and it's very difficult to get the maintainers to accept patches to improve it, since 68k is not exactly widely used at this point.
Specifically, I heard that the 68k backend keeps getting worse, whilst the front-end keeps getting better. So choosing a GCC version is a case of examining the tradeoffs between getting better AST-level optimisations from a newer version, or more optimised assembly language output from an earlier version.
I imagine GCC 6.5 probably has a backend that makes better use of the 68k chip than the GCC 11.4 that ngdevkit uses (such as knowing when to use dbra) but is probably worse in other ways due to an older and less capable frontend.
I tried this with the old SAS/C Amiga compiler. It put the address in A0 and then moved the value into (A0) in the next instruction, so the setup was a bit less efficient. And it refused to use "dbra" no matter what I tried.
> Note how gcc is smart enough to detect that the expression ((0xc<<12) | 0xafe) is constant, so it can skip shifts and bitwise assembly operations and just emit the resulting immediate value at line 14. The same goes for the loop condition, gcc emits constant 1280 at line 10 in place of the multiplication 40x32. A classic compiler optimization called constant folding, but nice nonetheless.
This is actually required rather than an optimisation for any C compiler, and has been from early on: C allows constant *expressions*, not just literal constants, wherever a compile-time constant is mandatory (statically allocated sizes, case labels, etc.). While folding ordinary expressions isn't guaranteed, you'll see even at -O0 that the constant was evaluated at compile time, because it's harder to selectively avoid folding than to always fold, given that constant expressions must be folded anyway for the required features.
The step of declaring hw registers in assembly reminds me that converting an integer to a pointer is, IIRC, at best implementation-defined and at worst UB, and playing around with volatile won't necessarily save you from a zealous optimizer.
Arguably every hardware register should be declared that way as a symbol