Retrocomputing FPGA work is a fun diversion from normal software, enough that my brain was convinced they weren't related (had a bit of a mental block for software). Over the course of a year, I went from knowing basically nothing to releasing 3 different FPGA emulation cores of my own for multiple platforms, along with releasing something like 5 ports (which is not necessarily trivial, particularly for a beginner) of existing cores to the Analogue Pocket.
It has been a very fun experience, and I've found it to be extremely addicting. It helps that there's a fairly tight-knit community very interested in furthering the development of FPGA hardware preservation, so people are very willing to donate, test, and contribute feedback, which is a great feeling for open source work.
> Retrocomputing FPGA work is a fun diversion from normal software, enough that my brain was convinced they weren't related (had a bit of a mental block for software).
That's awesome! I feel this; I've had a software development mental block for a number of years now. I just don't find modern software all that interesting anymore. Lost in mountains of model mapping, layers of terrible abstraction, that never-ending package update grind (shudders), bad APIs, bugs closed as "won't fix, works as designed" (sigh), truly insane complexity, and so many things that are simply outside of my control.
It's my interest in related but different areas that has kept me engaged recently: microelectronics, 3D printing, and home automation. They exercise enough of my decades of programming experience to give me that fix, but the projects are small and focused on solving very concrete problems instead of moving a decimal point on some spreadsheet somewhere completely disconnected from me. It's great when you make something for a friend and you can see the joy in their eyes as they realize how much this thing you made helps them.
Sounds like FPGAs are doing that for you and that makes me happy!
Although operationally there's not much of a difference between a cycle-accurate FPGA implementation and a cycle-accurate software implementation (especially on a board that you can plug into a hardware CPU socket[1]), the FPGA implementation is interesting to me because it seems closer to the original gate-level implementation in hardware, and because it seems more tangible. Of course a custom silicon implementation (such as the Tiny Tapeout reimplementation of the PDP-8 from a recent HN comment[2]) seems even more real and exciting, even if the cycle/signal timing is the same. Part of it may be that the custom silicon implementation is a self-contained reimplementation rather than an emulation based on pre-existing, complex components.
Thanks for your work. It’s on my list sometime to look into porting an open source GBC core to the Pocket (and add a patch or three on top) - I want to play multiworld randomizers on it!
I agree strongly with the sentiment of this article, though the author missed the fully open source programmable systems like the Lattice ECP5 and iCE40 FPGAs. There is something magical about using the LiteX "make me an SoC out of this board" system.
As the ACM Digital Library has gone open access, I can recommend this Jan Gray paper, "Hands-on Computer Architecture – Teaching Processor and Integrated Systems Design with FPGAs"[1]. There are different opinions on whether understanding computer architecture makes you a better developer (I tend to think it does), but it's a really amazing time to be able to explore these concepts without needing to be at a company or in a university setting.
LiteX is very neat, but at the same time is a massive pain as soon as you move past building an example SoC. I've been spending a lot of time on this recently.
I wanted to learn how to use Ghidra for reverse engineering binaries. The thing that allowed me to really improve with it was using it to analyze old video games that I played as a kid in the 1990s. Finding previously-unknown cheat codes, coming up with improvement patches, and figuring out how they work is very motivating!
I'm happy that these retro hardware projects are working out; I've liked seeing people test out what I've found in Ghidra on real systems.
Why is the MiSTer listed as "used to perfectly emulate and/or upscale analog signals"?
MiSTer doesn't just concern itself with analog signals, it simulates the entire system and outputs the original analog signal or can upscale it for digital output on its own. This description could make somebody think it's just a scaler.
Both. MiSTer can output the original signal through the VGA output or through the HDMI digital output (e.g. using an HDMI to VGA converter), and it is also possible to configure the HDMI output as a scaler (with different modes, from the lowest-latency modes using a single line as buffer, to more complex ones changing the frequency and/or adding low-latency CRT/LCD/etc. filters). Effects can be added to the analog output, e.g. a scan doubler. Please consider checking the documentation; it is an amazing project.
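To illustrate the scan doubler idea in the simplest possible terms, here's a hypothetical Python sketch (not MiSTer's actual implementation, which works on a streaming video signal with a one-line buffer): each source scanline is simply emitted twice.

```python
def scan_double(frame):
    """Naive scan doubler: emit every source scanline twice.

    `frame` is a list of scanlines (each scanline is a sequence of
    pixel values). Doubling the line count turns a 240-line image
    into a 480-line one, which is the basic idea behind converting
    15 kHz video for 31 kHz displays.
    """
    doubled = []
    for line in frame:
        doubled.append(line)
        doubled.append(line)
    return doubled

# A 3-line "frame" becomes 6 lines, each source line repeated.
lo_res = [[0, 1], [2, 3], [4, 5]]
hi_res = scan_double(lo_res)
```

A real scan doubler also has to deal with sync timing and pixel clocks; this only shows the line-repetition mapping.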
I can see how an FPGA can perfectly emulate the logic of an older chip, but the actual physical layout is different, right? Are there any timing issues that result from this?
The reality is that the vast majority of these FPGA-based clones don't actually perfectly emulate the logic. They're using the same reverse engineering techniques the traditional emulator developers used and sometimes even the same community documentation. The results are often quite good, but they're making a new implementation that matches the observed behavior of the original system to the best of their abilities.
Now there are some exceptions. Nuked MD FPGA[0] is a recent example of an FPGA recreation that is a fairly direct translation of the original logic using silicon die analysis. In this case, the logic is basically identical, but as you guessed the physical layout is different. Generally speaking, you write FPGA "gateware" in a language like Verilog or VHDL. These don't intrinsically carry any information about the physical layout of the logic; that is handled by the toolchain instead. As wmf says, this is generally not a problem. For synchronous logic, either the total propagation delay is small enough for a single cycle or it isn't. The toolchain will estimate this delay and report whether you met timing for the configured clock speed.
Not everything you can do in silicon translates well to FPGAs (using both clock edges, for instance, is generally not well supported), but for the most part these things are easy enough to work around.
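The met-timing check described above boils down to asking whether the slowest combinational path (plus register setup time) fits within one clock period. A toy sketch with made-up numbers; real static timing analysis in the FPGA toolchain models individual routing, setup, and hold delays per path:

```python
def meets_timing(path_delays_ns, clock_mhz, setup_ns=0.1):
    """Crude static-timing check: the slowest combinational path
    (plus register setup time) must fit within one clock period.
    Returns (met, slack_ns); negative slack means timing failure."""
    period_ns = 1000.0 / clock_mhz
    worst = max(path_delays_ns)
    slack = period_ns - (worst + setup_ns)
    return slack >= 0, slack

# Hypothetical path delays for a core clocked at 50 MHz (20 ns period).
ok, slack = meets_timing([4.2, 7.9, 12.5], clock_mhz=50)
# ok is True here: the worst 12.5 ns path fits in a 20 ns period.
```

When a path misses timing, the usual fixes are lowering the clock or pipelining the path across more registers, which is exactly the kind of restructuring the surrounding comments describe.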
Except for the rare cases where a chip can be decapped, turned into a netlist and painstakingly translated 1:1 into logic primitives (which itself is usually impossible without some fudging), all re-implementations are exactly that.
You can still do higher level stuff in an FPGA. Maybe you don't actually care how the sprite hardware really works, and you just make your own that mostly works the same. Maybe you don't even care that a PPU is split into 3 chips, you make yours without regard for the physical delineation of the original.
There are some cores out there like this - written off of software emulators with minimal original research. It's often possible to get something that somehow plays games even while being "inaccurate", but this yields little benefit over software emulation besides lower latency from controller inputs.
Higher-quality cores always involve original research. It is rare that documentation already exists at the detailed level you need. The best people in the field blackbox the original chips and, based on years of experience and a knack for sussing out behavior by thinking like the original chip's designers, can make a functionally and timing-accurate model that operates in lockstep with the real chip, cycle for cycle, with the same data on the bus. This is the sweet spot for FPGA implementations, but it also requires a lot of skill and expertise.
At the extreme end is stuff like Nuked MD and the Visual 6502/68k projects. The logic is cloned at the gate level without any guessing. Still, some changes are necessary: for example, chips with internal tristate busses are impossible to do on FPGA fabric. Clocking on both edges of the clock. Using multiple phase clocks. Using dynamic logic. And so on. These implementations are usually much less space-efficient than the approach in the previous paragraph, but offer the highest accuracy.
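The lockstep verification idea from the paragraphs above can be sketched as a harness that steps a reference model and a reimplementation together and compares the bus value on every cycle. The counter models here are toy stand-ins (my own, not real chip models); in practice the reference side is often a capture from the actual chip:

```python
class CounterRef:
    """Stand-in 'known good' model: an 8-bit counter driving a bus."""
    def __init__(self):
        self.value = 0
    def step(self):
        self.value = (self.value + 1) & 0xFF
        return self.value  # value driven onto the bus this cycle

class CounterDut:
    """Reimplementation under test; behaviorally identical here."""
    def __init__(self):
        self.value = 0
    def step(self):
        self.value = (self.value + 1) % 256
        return self.value

def lockstep(ref, dut, cycles):
    """Step both models together; return the first cycle where the
    bus values diverge, or None if they match for the whole run."""
    for cycle in range(cycles):
        if ref.step() != dut.step():
            return cycle
    return None

mismatch = lockstep(CounterRef(), CounterDut(), 1000)
```

The first divergent cycle is the interesting output: it points directly at the behavior the reimplementation got wrong.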
An FPGA can enable these things, but it doesn't magically happen. A MiSTer can emulate a PC with the ao486 core, which is an achievement, but the ao486 core doesn't precisely mimic the machinery or timing of the 80486 or any other CPU.
In normal synchronous logic, if something takes N cycles, it takes N cycles; sub-cycle timing differences don't matter. If the original chip is doing weird asynchronous stuff, it could be hard to properly emulate.
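A minimal sketch of why sub-cycle differences are invisible in synchronous logic: whatever happens between the registers, each input emerges exactly N cycles later (toy model, my own illustration):

```python
def simulate_pipeline(inputs, stages):
    """Model an N-stage register pipeline. Each input value appears
    at the output exactly `stages` cycles later, regardless of how
    fast or slow the combinational logic between registers is, as
    long as every path meets timing."""
    regs = [0] * stages  # pipeline registers, reset to zero
    outputs = []
    for x in inputs:
        outputs.append(regs[-1])   # output of the last register
        regs = [x] + regs[:-1]     # clock edge: everything shifts
    return outputs

# With 3 stages, the input 7 shows up on the output 3 cycles later.
out = simulate_pipeline([7, 0, 0, 0, 0], stages=3)
# out == [0, 0, 0, 7, 0]
```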
You can do any combinational and sequential logic you want without having to use a forest of discrete 74xx ICs. If you've ever scratched an itch with redstone in Minecraft or taken a digital logic course, you might find it quite rewarding to translate a problem into a binary one and then implement an optimal solution with K-maps and implication tables.
Practically speaking, they're useful for hobbyists who want to push beyond the I/O and number-crunching capabilities of microcontrollers. In many cases the 32-bit micros around today are good enough, but I think it's satisfying to work with bare logic elements.
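One nice sanity check when minimizing logic with K-maps is an exhaustive truth-table comparison of the original and minimized expressions. A toy example (my own, not from the thread), verifying that A·B + ¬A·B reduces to just B:

```python
from itertools import product

def equivalent(f, g, nvars):
    """Brute-force check that two boolean functions agree on every
    input combination -- exactly what a correct K-map grouping
    guarantees. Feasible for the small variable counts K-maps
    handle anyway."""
    return all(bool(f(*bits)) == bool(g(*bits))
               for bits in product((0, 1), repeat=nvars))

# Original sum-of-products and its K-map-minimized form.
original  = lambda a, b: (a and b) or ((not a) and b)
minimized = lambda a, b: b

eq = equivalent(original, minimized, 2)
```

The same check scales to implication-table (state minimization) results by comparing the output sequences of the original and reduced state machines.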
If you have a couple of years and a friend, it's possible to build an entire system, from designing the architecture through writing the apps to run on it: http://www.projectoberon.net
In my undergrad computer architecture class, we built a processor over the semester with an FPGA: writing all of the different math circuits and a logic circuit, then combining them together.
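Combining separate math and logic circuits into a processor is essentially putting several function units behind a mux selected by an opcode. A hypothetical, much-simplified 8-bit ALU sketch (my own example, not from any particular course):

```python
def alu(op, a, b):
    """Tiny 8-bit ALU: separate 'circuits' (add, sub, and, or)
    selected by an opcode, the way an output mux would in hardware.
    Results wrap to 8 bits, like a fixed-width datapath."""
    mask = 0xFF
    if op == "add":
        return (a + b) & mask
    if op == "sub":
        return (a - b) & mask
    if op == "and":
        return a & b & mask
    if op == "or":
        return (a | b) & mask
    raise ValueError(f"unknown opcode: {op}")

result = alu("add", 200, 100)  # wraps modulo 256
```

In the hardware version all the function units compute in parallel every cycle and the opcode only selects which result reaches the output; the if-chain here is just the software analogue of that mux.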
Pretty much. The DE0, which most of the retro FPGA computers rely on, was cheap, but it is almost impossible to find at decent prices anymore. Even the non-mainstream FPGA boards (WaveShare, etc.) have jumped massively in price, even though they're really pretty useless for retro computing unless you're willing to reinvent the entire universe to bake an apple pie.
Whether the 2023 entry price of a few hundred US dollars plus shipping (that is, a $225 DE10-Nano board + $65 RAM module, with basic analog outs and USB addable via ~$20 generic dongles) is a ripoff, for a system that uses FPGA tech to simulate/emulate a large range of old computers and consoles to a leading standard, with a long (ever-growing) and impressive feature list of options and a very stable long-term front-end... is quite subjective.
I think it's still exceedingly good value. But certainly not the only, or outright cheapest, option.
That's odd. Whatever you could get 5-7 years ago is still available (except better) for pretty much the same or lower prices and with a lot more examples to start from.
agg23 | 2 years ago
bmurphy1976 | 2 years ago
musicale | 2 years ago
[1] https://microcorelabs.wordpress.com
[2] https://news.ycombinator.com/item?id=38416886
captaincaveman | 2 years ago
a_t48 | 2 years ago
terrycody | 2 years ago
kjs3 | 2 years ago
ChuckMcM | 2 years ago
[1] https://dl.acm.org/doi/pdf/10.1145/1275240.1275262
agg23 | 2 years ago
bbayles | 2 years ago
FirmwareBurner | 2 years ago
Do you have any findings to share?
deepthaw | 2 years ago
faragon | 2 years ago
jxdxbx | 2 years ago
mikepavone | 2 years ago
[0] https://github.com/nukeykt/Nuked-MD-FPGA
mips_r4300i | 2 years ago
andrewf | 2 years ago
wmf | 2 years ago
eachro | 2 years ago
_moof | 2 years ago
willis936 | 2 years ago
082349872349872 | 2 years ago
(they provide Verilog on this website, but Wirth himself has an HDL of —of course— his own design: https://people.inf.ethz.ch/wirth/Lola/index.html )
NB. Risc5 != RiscV
ecshafer | 2 years ago
xattt | 2 years ago
NikkiA | 2 years ago
crtified | 2 years ago
ted_dunning | 2 years ago
How in the world is that a ripoff?
monocasa | 2 years ago