I teach the introduction to computing class at MSU and agree entirely: most students need to start with the simplest possible introduction to computing. My favorite two models are:

The Scott CPU - https://www.youtube.com/watch?v=cNN_tTXABUA (great book; the website is now offline, unfortunately: https://web.archive.org/web/20240430093449/https://www.butho...)
An extremely simple non-pipelined 8-bit CPU. The emulator lets you step through tick by tick and see how the machine code drives an operation. I spend one lecture showing each tick of a bitwise AND, following the data from the instruction into the instruction register, how the instruction selects the general-purpose registers, through the ALU, and then back from the accumulator into a register. It's one of my favorite lectures of the year.
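If you want to follow the same journey in code, here is a minimal Python sketch of those ticks. This is not the Scott CPU's actual control logic; the 8-bit instruction encoding below is made up for illustration:

    # One AND instruction walking through the datapath, one "tick" per step.
    regs = [0b10101010, 0b11001100, 0, 0]  # general purpose registers R0..R3
    acc = 0                                # ALU results land here

    instruction = 0b01_00_01_10  # made-up encoding: AND, src=R0, src=R1, dest=R2

    ir = instruction             # tick 1: instruction register latches it
    opcode = (ir >> 6) & 0b11    # tick 2: decode...
    a = regs[(ir >> 4) & 0b11]   # ...instruction bits select the
    b = regs[(ir >> 2) & 0b11]   # ...source registers
    if opcode == 0b01:           # tick 3: ALU computes into the accumulator
        acc = a & b
    regs[ir & 0b11] = acc        # tick 4: write back into the destination

    print(f"R2 = {regs[2]:08b}")  # R2 = 10001000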
The Little Man Computer - https://www.101computing.net/LMC/

A higher-level von Neumann-style computer that introduces students gently to assembly, where they can fully understand the "machine code", since it's just decimal. We then build an emulator, an assembler, and a compiler for an extension to LMC that introduces the notion of a stack to support function calls.
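To give a taste of the stack extension, a minimal Python sketch of the emulator loop. The CALL/RET opcodes below are invented for illustration (standard LMC has no stack, and 9xx is normally I/O):

    # LMC-style decimal machine with an invented CALL/RET pair.
    mem = [0] * 100
    mem[0] = 910   # CALL 10: push return address, jump to 10 (invented 9xx)
    mem[1] = 0     # HLT
    mem[10] = 599  # LDA 99: load the "function's" data
    mem[11] = 800  # RET: pop return address (invented 800)
    mem[99] = 42

    pc, acc, stack = 0, 0, []
    while True:
        instr = mem[pc]
        pc += 1
        op, addr = divmod(instr, 100)
        if instr == 0:            # HLT
            break
        elif op == 5:             # LDA addr
            acc = mem[addr]
        elif op == 9:             # CALL addr: remember where to come back to
            stack.append(pc)
            pc = addr
        elif instr == 800:        # RET: resume after the call site
            pc = stack.pop()
    print(acc)  # 42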
It's a fun one-semester class, not as intense as NAND-to-Tetris, but still an overview of how computing works.
My first introduction to this stuff was also the Little Man Computer. I won't say when, as it might make some readers feel old, but I have very fond memories of playing around with it.
Similarly fond memories of the teacher letting me just do my own thing at the back of the classroom after noticing I was writing an interpreter rather than hard-coding all the logic in the task.
It's probably hard to overstate how good an introduction to programming he was, actually. Not a codeforces-style geek, but he constantly drilled the value of writing beautiful, as-simple-as-possible code into a bunch of 14-year-olds, of whom probably only me and a friend were listening. Maybe it's no coincidence that among my friends/mutuals from that age there are core maintainers for, I think, three pretty big languages.
I made a Little Man Computer simulator this year that I use to teach students, but it uses binary and 2's complement, which helps them learn that too: https://www.mathsuniverse.com/lmc
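For anyone curious what that buys the students, the whole 8-bit two's-complement idea fits in a few lines of Python (a toy illustration, not the simulator's code):

    # The same 8 bits, read as unsigned (0..255) or signed (-128..127).
    def to_signed(byte):
        return byte - 256 if byte >= 128 else byte

    def negate(byte):              # two's complement: invert the bits, add 1
        return (~byte + 1) & 0xFF

    print(f"{negate(5):08b}")      # 11111011
    print(to_signed(0b11111011))   # -5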
How awesome that this exists. I was learning how CPUs work and designing my own CPU with an emulator about 20 years ago as a teenager, by googling my way into obscure forums, blog posts, and the homemade-CPU webring. Not long ago I ran an experiment: would I be able to find all the learning materials to do that again, by myself, through Google? The outcome deeply unsettled me. Google just gives you total garbage. Half the results are AI-generated; the other half is sloppily written, half-assed, abstract pseudo-tutorial nonsense on Medium or some other paid-for-engagement platform. My children would not be able to reproduce that kind of self-learning without watching some YouTubers do it, taking a curated paid course, or accidentally stumbling upon "gems" like this, i.e. via HN. We desperately need old Google and the old internet back, and to somehow save and preserve humanity's knowledge.
I am glad you followed up on this, to see if you could do it again! That matches my experience.
I remember feeling like the big tech corps had turned "consumer" into a pejorative and started relentlessly abusing their customers circa 2016 or so... especially Microsoft, post-Windows 8. Consumer devices don't need to work; that's for pro devices. Consumer devices just need to sell ads, soak up user time, and let businesses market their goods for consumption!
Search results have only degraded since late 2019 or so. Even on other platforms, like YouTube: you get 4-5 real results, and the rest are "suggested for you", even if you've logged out. Google and YouTube both feel like "consumer" search engines, where advertising and eyeball time trump usefulness and user authority (i.e. the user being able to ask for what they want, and get it).
I agree. It's hard, though: SEO people are malicious and persistent, and with modern tech they have incredible tools.
And with hand curation, it's hard to feel like it's 'worth it' when, instead of being able to build a community, your results are scraped and shown out of context.
If you have any thoughts on how to get that sort of culture back, I'm open to them.
Part of me thinks we need a new protocol: a new lightweight web built around markdown, with absolutely no (client-side) active content allowed.
What I'm not sure about is how to combat bad actors / spammers / low-effort pages and AI slop. I'm leaning towards some kind of git-like storage with history as a mandatory part of the protocol, and some kind of cryptographic web-of-trust endorsement system.
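To make the git-like part concrete, a toy Python sketch (every name here is hypothetical, and it deliberately ignores the hard part, which is the endorsement system): each revision commits to its predecessor by hash, so history can't be silently rewritten.

    import hashlib, json

    def commit(history, content, author):
        # Each revision embeds the hash of the previous one.
        parent = history[-1]["hash"] if history else None
        body = {"parent": parent, "author": author, "content": content}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        history.append({"hash": digest, **body})

    page = []
    commit(page, "# My page\nFirst draft.", "alice")
    commit(page, "# My page\nSecond draft.", "alice")
    print(page[1]["parent"] == page[0]["hash"])  # True: tamper-evident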
For those who haven't seen them before, throwing out nand2tetris (https://www.nand2tetris.org/) and nandgame (https://www.nandgame.com/) as another interesting journey from logic gates -> computer programs running on a CPU.
nandgame, in particular, is really easy to get started with and has been updated quite a lot over the years. If you looked at it a while ago, check for new updates!
nand2tetris goes into a bit more detail on some things; I like it too, but it's harder to get started with.
This one is a 4-stage pipelined CPU: https://github.com/wyager/Lambda16

This one is a superscalar out-of-order CPU: https://github.com/wyager/Lambda17

Both are written in Clash, which is a subset of Haskell that compiles to target FPGAs. It's an incredibly OP HDL.
I don't think I ever ran the second one on an actual FPGA, because at the time values of type `Char` wouldn't synthesize, but I think the Clash compiler fixed that at some point.
As much as I really benefited from being able to internalize system architectures like these many times over… I do wish now, as someone who ended up in software, that there were similar hand-holdy guides to implementing the "core" of out-of-order superscalar execution engines, too. They're crucial to understanding how modern processors _kinda actually work, to a zeroth-order approximation_, even though it's impossible to convey the engineering scope of modern CPUs to those who need hand-holding.
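For what it's worth, the zeroth-order core of the idea does fit in a few lines: track when each register's value becomes available, and issue any instruction whose operands are ready, regardless of program order. A toy Python sketch, not any real microarchitecture (no renaming, no reorder buffer, no issue-width limit):

    from dataclasses import dataclass

    @dataclass
    class Instr:
        text: str
        dest: str
        srcs: tuple
        latency: int

    program = [
        Instr("load r1",        "r1", (),      3),  # slow load
        Instr("add  r2, r1, 1", "r2", ("r1",), 1),  # waits on the load
        Instr("mul  r3, 2, 3",  "r3", (),      1),  # independent
    ]

    ready_at = {}        # register -> cycle its value becomes available
    waiting = list(program)
    cycle = 0
    while waiting:
        issuable = [i for i in waiting
                    if all(ready_at.get(s, 0) <= cycle for s in i.srcs)]
        for i in issuable:
            print(f"cycle {cycle}: issue {i.text}")
            ready_at[i.dest] = cycle + i.latency
            waiting.remove(i)
        cycle += 1
    # The mul issues in cycle 0, before the add: out of (program) order.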
At Georgia Tech I had one class (CS 2110) that dealt with implementing a simple in-order non-pipelined processor, one class that dealt with implementing a pipelined processor (CS 2210), then two classes (CS 4290 and CS 3220, IIRC) that dealt with implementing an out-of-order processor (4290 was more theory and also covered caches; 3220 was entirely implementing it on an FPGA). So that sort of thing does exist, but I don't know whether most universities will let you take single classes like that.
I considered trying a simple CPU design from logic gates too, but I ended up wondering about some of the performance characteristics; maybe some knowledgeable people are reading this. What I'm wondering about is the switching speed of logic gates compared to the signal speed in the electrical connections of a realistic CPU. I.e., how many logic-gate lengths (assume logic gates to be square) does an electrical signal travel along a connection in the time a logic gate needs to invert its output? Another number that seems relevant: how much spacing do electrical connections need, compared to the size of a logic gate?
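Not an authoritative answer, but a back-of-envelope in Python with loudly assumed round numbers suggests the scale:

    # All figures are assumptions; real values vary hugely by process node.
    c = 3.0e8               # speed of light, m/s
    signal_speed = 0.5 * c  # assumed: ~c/2 on a well-behaved line
    gate_delay = 10e-12     # assumed: ~10 ps inverter switching time
    gate_size = 1e-6        # assumed: ~1 um "square" logic cell

    distance = signal_speed * gate_delay
    print(distance / gate_size)  # ~1500 gate lengths per gate delay

Under these assumptions a signal would cross on the order of a thousand gate lengths per gate delay; in practice long on-chip wires behave as RC lines and are far slower, which is why wire delay matters so much in real designs.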
"The MOnSter 6502 runs at about 1/20th the speed of the original, thanks to the much larger capacitance of the design. The maximum reliable clock rate is around 50 kHz. The primary limit to the clock speed is the gate capacitance of the MOSFETs that we are using, which is much larger than the capacitance of the MOSFETs on an original 6502 die."
I'm just here to point out that your question is nonsensical. You're asking for the propagation delay of the wires versus the switching time of the gates, but you don't want the propagation delay measured as the time it takes a signal to propagate, nor as the switching time of a NOT gate; instead, you're asking for the total propagation delay to be normalized by the switching time of a NOT gate and then converted back into a wire-length equivalent, normalized by the longest dimension of a NOT gate.
None of these things make much sense.
Also, as an approximate answer to your question: the simplest circuit without a flip-flop, i.e. an oscillator, can easily reach hundreds of GHz using silicon-germanium-based semiconductor manufacturing.
The performance characteristics won't be great, but it would still be a fun project. In fact, one of the early 32-bit RISC processors, PA-RISC, was originally shipped as a set of four large boards covered in discrete 74-series logic. Those boards do include a large ALU chip (without which the design would probably span 8 boards instead...)
I'm pondering building a CPU from logic-gate chips. The thing is, most projects like that use chip-count-efficient (usually microcoded) designs, which aren't fast enough to run "real", familiar software. I want a 32-bit instruction set with virtual memory, capable of running Linux and fast enough to run games like Doom. The fastest reasonably available logic family is 74AUC (with the possible exception of exotic ECL gates). Combined with copious use of fast asynchronous SRAM chips, I think performance in the ballpark of 20 MIPS should be attainable. I'm half-expecting that I'm making some huge error in my assumptions and that these numbers will turn out to be impossible, but that hasn't happened yet.
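As a crude sanity check on the 20 MIPS figure, a Python cycle-time budget with assumed ballpark numbers (check real datasheets before believing any of this):

    gate_delay  = 2e-9    # assumed: ~2 ns per 74AUC gate
    logic_depth = 10      # assumed: ~10 gate levels on the critical path
    sram_access = 10e-9   # assumed: ~10 ns fast asynchronous SRAM

    cycle_time = logic_depth * gate_delay + sram_access   # 30 ns
    print(1 / cycle_time / 1e6)  # ~33 MIPS at one instruction per cycle

So the order of magnitude looks plausible, at least before wiring parasitics and memory-system realities intrude.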
I started this journey a while back using Tanenbaum's MIC-1 during my uni days with another colleague. Still have it online if anyone is interested: https://github.com/elvircrn/mic-1
https://www.cs.hmc.edu/~cs5grad/cs5/hmmm/documentation/docum...
https://github.com/jeceljr/digitalCPUzoo/tree/main/MCPU
I'm not active there any more, but I used to be when I was developing my own toy CPU: https://github.com/robinsonb5/EightThirtyTwo
I imagine physical factors like capacitance, inductance, impedance, etc. will be the limiting factor.