top | item 17833829

A2Z: A computer designed and built from scratch. Custom CPU on FPGA (2017)

262 points | F4HDK | 7 years ago | hackaday.io | reply

50 comments

[+] QuadrupleA|7 years ago|reply
Cool project! It's interesting to see the dichotomy in tech communities between the 'minimalists' on one hand, who love to get under the hood, work their way down to the bare metal, and understand how everything works, and the opposite trend: building seemingly simple web apps that sit on top of 1,000 libraries and frameworks, packing their huge dependency chains into Docker containers distributed onto clusters (probably because the app runs so slowly on a single VM), etc. I wonder if it's two fundamentally different kinds of personalities at work.
[+] freehunter|7 years ago|reply
The first person you describe has to have an almost endless amount of time and/or technical interest available to them. The second person you describe has a job to get done as quickly as possible before moving on to the next job.

The whole reason libraries and frameworks were created is so everyone doesn't have to dig down to the bare metal to get a task completed. If my manager asked me to provision a server to run an application on and I sat down and built my own hardware from scratch and then wrote my own OS rather than clicking a single button in VMware, I'd be fired pretty quickly.

[+] F4HDK|7 years ago|reply
You are 100% right. This project fits 100% with my personality. I like to understand in detail what I'm working on, even if I need to "lose" time exploring things not directly linked to what I'm supposed to do. And I also like systems that are designed at the necessary level.
[+] zackbrown|7 years ago|reply
> I wonder if it's two fundamentally different kinds of personalities at work.

Not two different kinds of personalities, just two different kinds of work. Many people find ways to do both.

The minimalist role is better for learning and building for the art of it — when the purpose of the making is the making itself. The article is a great example of this.

The other role is when that learning needs to be applied to a further end. E.g. when shipping product, code is a "means," and pre-built libraries and layers of abstraction are leverage.

[+] Someone|7 years ago|reply
Give it a few years, and we will see the first ‘compile npm to FPGA’ project that still uses a thousand libraries, but does away with the CPU, the OS, the docker container, the web server, etc.

(if it doesn’t exist already. The best I can google is https://github.com/hamsternz/FPGA_Webserver, which is incomplete, misses the ‘npm to’ part, and seems abandoned)

[+] dscpls|7 years ago|reply
I don't think it's necessarily two fundamentally different personalities (unless I have multiple...which I might...)

I grew up taking things apart, and I loved the courses where we built logic gates or modified compiler or interpreter code.

I now build things on the shoulders of giants.

But when I need to, I know I can dive to the deepest levels to debug something, or I can write or customise any part of the stack.

It sure is a mindset shift and a context switch. I consider choosing the right moment to switch approaches to be one of the most important and hardest tasks in developing software systems.

[+] abfan1127|7 years ago|reply
I'm a bit of an odd duck because I do ASIC design for work (custom Digital Audio Codecs) and do web development for fun/side work. So I do the bare metal and the high level.
[+] blacksmith_tb|7 years ago|reply
It reminds me a little of people who restore old cars: yes, there's something great about it, but no, I'm not interested in driving a Model T around myself...
[+] shacharz|7 years ago|reply
There's the great nand2tetris course [1] - it teaches, step by step, how to build a computer starting from the simplest logic gates, using an HDL, then building your own ALU and computer, and later on an operating system, etc.

[1] https://www.nand2tetris.org/
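The course's opening idea, that every other gate can be derived from NAND alone, can be sketched in a few lines of Python (a toy illustration of the concept, not the course's actual HDL):

```python
# Every gate below is built only from NAND, as in nand2tetris's first chapter.

def nand(a, b):
    """Universal gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a):
    # NAND with both inputs tied together inverts the signal.
    return nand(a, a)

def and_(a, b):
    # AND is just NAND followed by an inverter.
    return not_(nand(a, b))

def or_(a, b):
    # By De Morgan: a OR b == NOT(NOT a AND NOT b).
    return nand(not_(a), not_(b))

def xor_(a, b):
    # XOR: true when a OR b, but not both (NAND excludes the both-1 case).
    return and_(or_(a, b), nand(a, b))
```

From there the course composes these gates into multiplexers, adders, and eventually a full ALU, which is exactly the progression the comment describes.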

[+] whitten|7 years ago|reply
To quote the author (F4HDK), who designed everything from the hardware to the software (compiler, loader, assembler, etc.): It is a design that came from my imagination, not from existing CPUs. It is a RISC design, without microcode, and instructions are very low level. Therefore, the executable code is much bigger than for CISC CPUs. But the CPU core itself is very simple: in particular, instruction decoding is very, very simple. It is also slower than CISC because the CPU spends a lot of time just reading instructions from the bus (and of course there is no execution pipeline)... But it works!
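The trade-off the author describes, trivially simple decoding at the cost of more instructions per task, can be illustrated with a toy fetch-decode-execute loop in Python (the opcodes and fixed four-field encoding here are invented for illustration; they are not A2Z's actual ISA):

```python
# Toy RISC-style interpreter: fixed-format instructions, no microcode.
# Each instruction is (opcode, dest, src1, src2); "decoding" is a single
# tuple unpack, which is what makes this style of decoder so simple.

def run(program, regs):
    pc = 0
    while pc < len(program):
        op, d, a, b = program[pc]          # trivial decoding
        if op == "ldi":                    # load immediate: field a is the value
            regs[d] = a
        elif op == "add":
            regs[d] = regs[a] + regs[b]
        elif op == "jnz":                  # jump to address a if register d != 0
            if regs[d] != 0:
                pc = a
                continue
        pc += 1
    return regs

# Multiplying 6 * 7 by repeated addition: with such low-level instructions
# even this needs a small loop, which is why executables grow.
prog = [
    ("ldi", 0, 6, None),   # r0 = 6  (loop counter)
    ("ldi", 1, 0, None),   # r1 = 0  (accumulator)
    ("ldi", 2, 7, None),   # r2 = 7
    ("ldi", 3, -1, None),  # r3 = -1
    ("add", 1, 1, 2),      # r1 += r2
    ("add", 0, 0, 3),      # r0 -= 1
    ("jnz", 0, 4, None),   # loop back while r0 != 0
]
regs = run(prog, [0] * 4)  # regs[1] ends up holding 42
```

A CISC machine might do the same multiply in one instruction, but would pay for it with a far more complex decoder and, typically, microcode.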
[+] ChuckMcM|7 years ago|reply
Nice Job! Don't be discouraged when people say "What a waste of time, why didn't you just use an Arduino?"

When I went to college I had a choice of majoring in Computer Science or Electrical Engineering (there weren't computer systems degrees at the time). Since I really really wanted to know how to build a computer from the ground up (I had assembled one from a kit already in high school and was hungry to know more!) I chose getting my degree in EE with a minor in CS. I don't know where you are in your studies but if you have the opportunity you might find, as I did, that this path scratches that particular itch.

There are a number of books you might track down which you would find interesting given what you now have learned about computer internals. One is "A DEC view on hardware design", which talks about the minicomputers DEC designed and their architecture. Others are "Introduction to Computer Systems using the PDP-11 and PASCAL" (McGraw-Hill computer science series) and "Digital Computer Design" by Kline. All are out of print, but a good computer science section in a library should have them.

One of the reasons I enjoy the older books on computer design is that they assume you don't know why different paths were chosen and so they explain in more detail why one path is better than another. Modern texts often assume you have learned these best practices elsewhere and so treat these design decisions as given knowledge.

If you ever do decide to pick it up again, the two places that you might find rewarding would be automated co-processing (like DMA controllers) and complex mathematical instructions (like floating point).

[+] F4HDK|7 years ago|reply
Thank you very much for this encouraging comment! I finished my studies 15 years ago; I made this project as an "autodidact". I don't know if I will work again on such projects, because I have tons of other electronics topics I want to work on (mainly radio). But if I pick it up again, it will be with a brand new CPU project.
[+] LeonM|7 years ago|reply
Needs a (2017) tag.

Cool project nonetheless. I built a custom CPU on an FPGA as a school project once. It was far less complicated than A2Z; IIRC I copied the instruction set from a different CPU so I could use that vendor's assembler (and subsequently its C compiler). I can recommend doing such a project (VHDL is not that hard to learn); it's an awesome learning experience!

[+] guiomie|7 years ago|reply
Same here. We implemented the architecture of a CPU in an FPGA; I think it was 8-bit or 16-bit, I'm not sure. It was great for understanding the ALU, opcodes, etc. Probably the most satisfying lab in my computer engineering degree.
[+] wilsonnb3|7 years ago|reply
Why do we bother tagging recent articles with the year? Is there a reason that I need to know this was from 2017 instead of 2016 or 2018?
[+] jwineinger|7 years ago|reply
Cool project. I did something similar for an FPGA class in college. Prof gave us 3 C programs and we had to implement everything to make it work. Difficult project but one of the most rewarding.
[+] bogomipz|7 years ago|reply
Under the hardware section the author states:

>"I have built this development board by myself, using wrapping technique, because I couldn’t find any board with 2MB of SRAM arranged in 16bits. I wanted SRAM, instead of DRAM, for the simplicity."

I have heard the term "wrapping" or "board wrapping" in historical references by Steve Wozniak and the original Homebrew Computer Club as well. Could someone describe what this "wrapping" process entails? Is it essentially translating the Verilog into physical wires and pins?

[+] fasquoika|7 years ago|reply
It's a way of cold-welding wires to the pins of electronic components [1]. It's pretty much fallen out of use as computers have gotten too small for the technique, but it's nice for prototyping because it's easier to undo than solder, yet more permanent than a breadboard.

[1]: https://en.wikipedia.org/wiki/Wire_wrap

[+] F4HDK|7 years ago|reply
I still use wire-wrapping with veroboards quite often in my home hobby. I like this method because, above a certain frequency, it is fairly difficult (practically impossible) to use breadboards due to electromagnetic issues. The goal is to test different configurations, rapidly try a new component, and explore new ideas quickly. On veroboard, the wire-wrapping method enables very dense connections.
[+] srcmap|7 years ago|reply
What software is required to compile, debug, and test the Verilog code for this project or other similar projects?

I used Xilinx VTPro 20 10+ years ago; I'd like to know the state of FPGA software tools today.

[+] F4HDK|7 years ago|reply
The source code is compatible with Altera Quartus II. You can also execute the A2Z emulator on your PC, without the Altera suite.
[+] cushychicken|7 years ago|reply
Best part of the FPGA class I took in college was writing a processor from scratch. ALU, program counter, control logic, all in VHDL.

Wish I'd taken the follow-on course about writing peripherals.

[+] jwineinger|7 years ago|reply
Same. We had/got to implement a keyboard controller and VGA output as well and the grading was based on our system running prof's C programs, taking input, and producing correct output. Lots of late nights, but great fun when it worked.
[+] bradhoffman|7 years ago|reply
Any recommended resources for getting into FPGA development? I've always been interested, but don't know where to start.
[+] megous|7 years ago|reply
I was reading through the gcc source code yesterday and found the moxie architecture, which seems like quite a similar, very small project. It is from the author of libffi and includes gcc, binutils, and qemu ports.

It's probably a nice example of how to take this further and implement GNU toolchain support for something like this.

[+] teabee89|7 years ago|reply
From https://hackaday.io/project/18206-a2z-computer/log/71637-5-a...

I’m afraid Linux and a C compiler are totally not feasible.

A2Z lacks many things to achieve the goal of C retargeting and Linux porting.

- A2Z only manages direct addressing in hardware. No complex address computation. If you want to implement a data stack or recursive functions, then direct addressing is not enough. You cannot retarget a C compiler with only direct addressing (or, if you emulate complex addressing modes in software, it would be very, very slow).

- A2Z has no interrupt management. You cannot implement/port a pre-emptive multitasking OS (Linux) without interrupt management.

- A2Z has no memory management unit.

- A2Z’s ALU is compatible with nothing.
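The first limitation, why direct-addressing-only hardware makes a C stack so painful, can be sketched in Python standing in for assembly (the instruction representation here is invented for illustration, not A2Z's real format):

```python
# A "direct load" instruction carries its target address as a constant
# field baked into the instruction itself - the hardware can only fetch
# from an address that is literally written in the code.
load_insn = {"op": "load_direct", "addr": 0}

def exec_load(insn, mem):
    # The hardware reads the constant address field; there is no
    # "load from the address held in a register" mode.
    return mem[insn["addr"]]

def load_local(off, sp, mem):
    """Read stack slot [sp + off] on a machine with only direct addressing.

    Software must first compute the address, then patch it into the
    instruction's address field before executing it (self-modifying
    code) - several extra steps for every single stack access, which is
    why C's stack-relative locals become prohibitively slow.
    """
    addr = sp + off            # 1. compute the address in software
    load_insn["addr"] = addr   # 2. patch the instruction's address field
    return exec_load(load_insn, mem)  # 3. finally perform the load

mem = [0] * 256
sp = 200            # toy stack pointer
mem[sp + 3] = 99    # a local variable lives at [sp + 3]
value = load_local(3, sp, mem)
```

With a hardware indexed or indirect addressing mode, the same access is one instruction; multiply this overhead by every local-variable reference a C compiler emits and the infeasibility the author describes becomes clear.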

[+] Zardoz84|7 years ago|reply
This makes me think about resurrecting my little toy 32-bit RISC CPU...