top | item 45321938


TapamN | 5 months ago

I created a (currently not publicly released) driver for the 1998 Sega Dreamcast's video hardware from scratch. It supports additional features over the driver in the open source homebrew OS, KallistiOS (KOS), like better render-to-texture support (the KOS driver only supports rendering to framebuffer-sized textures), tile multipass (which allows for accumulation-buffer-style effects or soft shadows), and dynamically toggling anti-aliasing on the fly (with KOS it's fixed after init). Some screenshots of what my driver can do are here: https://imgur.com/a/DyaqzZD

I used publicly available documentation (like https://www.ludd.ltu.se/~jlo/dc/ and the now defunct dcdev Yahoo Group), looked at the existing open source KOS driver, and looked at the source for Dreamcast emulators to figure out how things worked.

The GPU in the Dreamcast is a bit more complicated than PSX/PS2/GC since it doesn't accept polygons and draw them directly to the framebuffer. It's a tile-based deferred renderer, like many mobile GPUs, so it instead writes the polygons to a buffer in video RAM, then later walks through the polygons and renders the scene in tiles to an on-chip 32x32 pixel buffer, which finally gets written to RAM once.
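The binning step described above can be sketched roughly like this. This is a hypothetical illustration of how a driver decides which tile lists a polygon belongs to, using a screen-space bounding box test; the structure names and layout are made up for the example and are not the actual PVR register or tile-list format.

```c
/* Illustrative sketch: bin a triangle's screen-space bounding box into
 * 32x32-pixel tiles, the way a tile-based deferred renderer decides
 * which per-tile polygon lists the triangle must be appended to.
 * Names and layout are hypothetical, not the real hardware format. */

#define TILE_SIZE 32

typedef struct { float x, y; } Vec2;

static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }
static int max3(int a, int b, int c) { int m = a > b ? a : b; return m > c ? m : c; }

/* Marks covered[ty * tiles_w + tx] = 1 for every tile the triangle's
 * bounding box touches; returns the number of tiles touched. */
int bin_triangle(Vec2 v0, Vec2 v1, Vec2 v2,
                 int tiles_w, int tiles_h, unsigned char *covered)
{
    int x0 = min3((int)v0.x, (int)v1.x, (int)v2.x) / TILE_SIZE;
    int y0 = min3((int)v0.y, (int)v1.y, (int)v2.y) / TILE_SIZE;
    int x1 = max3((int)v0.x, (int)v1.x, (int)v2.x) / TILE_SIZE;
    int y1 = max3((int)v0.y, (int)v1.y, (int)v2.y) / TILE_SIZE;
    int count = 0;

    /* Clamp to the screen's tile grid (e.g. 640x480 -> 20x15 tiles). */
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    if (x1 >= tiles_w) x1 = tiles_w - 1;
    if (y1 >= tiles_h) y1 = tiles_h - 1;

    for (int ty = y0; ty <= y1; ty++)
        for (int tx = x0; tx <= x1; tx++) {
            covered[ty * tiles_w + tx] = 1;
            count++;
        }
    return count;
}
```

A real implementation would also clip against the triangle's actual edges rather than just its bounding box, but the bounding box is the simplest conservative test.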

This allows the Dreamcast to have a depth-only fillrate close to the 360 and PS3 (DC is 3.2 GPix/s vs 360/PS3 4.0 GPix/s), and it basically performs a depth-only prepass to avoid doing texture reads for obscured texels. It can also perform per-pixel transparency sorting (order-independent transparency) with effectively no limit to the number of overlapping pixels (but the sorter is O(n^2), so a lot of overlap can become very expensive).
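Conceptually, the per-pixel transparency sorting amounts to something like the following. This is only a software analogy of what the hardware does, with a made-up fragment layout; the point is the O(n^2) insertion-sort-like cost in the number of overlapping translucent fragments at one pixel.

```c
/* Illustrative sketch of order-independent transparency at one pixel:
 * overlapping translucent fragments are sorted by depth, then blended
 * back-to-front. An insertion sort like this is O(n^2) in the number of
 * overlapping fragments, which is why heavy overlap gets expensive.
 * The Fragment layout is hypothetical. */

typedef struct {
    float depth;        /* larger = closer to the camera */
    float r, g, b, a;
} Fragment;

/* Sort ascending by depth, i.e. farthest (back) fragment first. */
void sort_fragments(Fragment *f, int n)
{
    for (int i = 1; i < n; i++) {
        Fragment key = f[i];
        int j = i - 1;
        while (j >= 0 && f[j].depth > key.depth) {
            f[j + 1] = f[j];   /* shift closer fragments up */
            j--;
        }
        f[j + 1] = key;
    }
}

/* Blend the sorted fragments over the existing color with the standard
 * "over" operator, back-to-front. */
void blend_pixel(const Fragment *f, int n, float out[3])
{
    for (int i = 0; i < n; i++) {
        out[0] = f[i].r * f[i].a + out[0] * (1.0f - f[i].a);
        out[1] = f[i].g * f[i].a + out[1] * (1.0f - f[i].a);
        out[2] = f[i].b * f[i].a + out[2] * (1.0f - f[i].a);
    }
}
```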

To get a working driver for the Dreamcast, you have to set up some structures in video RAM so that the hardware knows what polygons are in what tile. Another thing the driver needs to do is coordinate the part of the hardware that takes polygon commands and writes them to video RAM, and the part that actually does rendering. You typically double buffer the polygons, so that while the hardware is rendering one frame, user code can submit polygons in parallel for the next frame to another buffer.
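The double-buffering arrangement described above might look something like this in outline. All the names and sizes here are stand-ins; a real driver would be kicking DMA transfers to video RAM and waiting on render-done interrupts rather than just swapping indices.

```c
/* Hypothetical sketch of double-buffered polygon submission: user code
 * fills one buffer while the hardware renders from the other, swapping
 * at frame boundaries. Sizes and the "hardware" side are stand-ins. */

#include <stddef.h>

#define BUF_CAPACITY 1024

typedef struct {
    float data[BUF_CAPACITY];  /* packed polygon command words */
    size_t used;
} PolyBuffer;

typedef struct {
    PolyBuffer bufs[2];
    int submit_idx;            /* which buffer user code is filling */
} FrameQueue;

/* Append one command word to the submission buffer; 0 on overflow. */
int submit_word(FrameQueue *q, float w)
{
    PolyBuffer *b = &q->bufs[q->submit_idx];
    if (b->used >= BUF_CAPACITY)
        return 0;
    b->data[b->used++] = w;
    return 1;
}

/* Swap at end of frame: the just-filled buffer is handed to the
 * renderer, and the other (now idle) buffer is reset for new
 * submissions for the next frame. */
PolyBuffer *end_frame(FrameQueue *q)
{
    PolyBuffer *render = &q->bufs[q->submit_idx];
    q->submit_idx ^= 1;
    q->bufs[q->submit_idx].used = 0;
    return render;
}
```

The payoff is the parallelism mentioned above: while the hardware walks one buffer, the CPU is free to fill the other.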

My driver started as just code in "int main()" to get stuff on the screen, then I gradually separated stuff out from that into a real driver.

spicyjpeg | 5 months ago

If anybody here wants to learn more about console graphics specifically, I think the original PlayStation is a good starting point, since it's basically the earliest and simplest 3D-capable (though it would be more correct to say triangle-capable, as it does not take Z coordinates at all!) GPU that still bears a vague resemblance to modern shader-based graphics pipelines. A few years ago I wrote a handful of bare metal C examples demonstrating its usage at the register level [1]; if it weren't for my lack of spare time over the last year, I would have added more examples covering other parts of the console's hardware as well.

[1]: https://github.com/spicyjpeg/ps1-bare-metal

ferguess_k | 5 months ago

Thanks for sharing! Pikuma also has a PS1 graphics programming course that I plan to take in the future.