One of my favorite college courses was my digital logic class, where we went from logic gates to building a 4-bit microprocessor. It really gave me a deeper understanding of what computers are and how everything ultimately works together.
I took this class when I was already a dozen years into my software development career. It’s certainly not necessary to write software, but I’m really glad I know the underpinnings of computing.
I had a course like that in my CS programme, and even though we really, really didn't go deep, we wired our own super simple 8-bit processor, and wrote the microcode, and then wrote a program in assembly to do something.
Stepping through the fetch-execute cycle one cycle at a time, watching your CPU do its little thing, and then realizing that that is exactly what actual CPUs do, they're just doing it billions of times per second, that was amazingly eye-opening.
In 1986, after quitting my PhD in computational molecular biology and starting out on a career as a software developer, I tried to figure out what the goal for the next 5 years was. I settled on satisfying myself that I understood (enough about) every level of the computer from semiconductor physics up to GUI/UX work. Luckily I had the semiconductor stuff already covered from my undergrad days and a friend from that time who was an EE major.
The rest took a bit longer than 5 years (maybe 7), and of course the learning didn't stop there. But I do remember thinking in the early 1990s "OK, I understand the entire stack at this point, what comes next?" My understanding had reached a level I was satisfied with - it would not have compared with someone who spent their entire working life focused on that level, but it was far, far beyond any normal computer user's understanding.
In the 2 decades and more since then, not a whole lot has changed that has invalidated the understanding I think I gained at the beginning. Microcode was a bit of a shock. The biggest shock has been watching a whole new generation of programmers who barely grasp the lower 80-90% of the system. Maybe that's good, but it doesn't feel obviously right.
It feels no less wrong than having, at best, a surface-level understanding of how serotonin / dopamine / norepinephrine receptors work and still being comfortable enough to drive a car at highway speeds, knowing full well that an error along this obscure signal processing pipeline is a potentially fatal swerve into oncoming traffic.
A couple of years ago I was in this position. I wanted to know how computers work and stumbled upon this resource. I can highly recommend the exercises xorpd gives on GitHub. They require no prior knowledge, and a lot of the exercises have answers now (still a work in progress though).
Next to that, it might be convenient to have something like asmdebugger.com open alongside.
Don't miss xorpd's amazing x64 poetry book at https://www.xorpd.net/pages/xchg_rax/snip_00.html
Another book that also really helped alongside this was "Code: The Hidden Language of Computer Hardware and Software" by Charles Petzold. It explains how computers work from the light bulb/transistor level all the way up to the OS, the monitor and the keyboard.
I second Charles Petzold's book Code! It was my first technical book and probably the reason I'm going to school for computer science right now. I recommend it to anyone interested in learning how computers work, even non-techies!
https://usborne.com/browse-books/features/computer-and-codin...
Scroll down to the bottom of the page for links to the pdfs.
I think there already is a subculture of programmers who value the lower resource use (i.e. power as well as memory/speed) and availability of free/open hardware that knowing how computers work affords. Whether that remains a niche or becomes more mainstream is an interesting question.
This reminds me of the nand2tetris course, which I think really helps in understanding the practical implementation of today's computers (https://www.nand2tetris.org)
The associated book ('The Elements of Computing Systems') was eye-opening for me. Not sure if 'today's computers' is quite right -- if I understand correctly, a lot of the simplicity has been optimised out of the hardware stack, to the point that even experts don't have a fully detailed understanding. Still really worthwhile to read and understand the basic principles, though.
I found that one of the most important insights from studying how computers work at a fundamental level is that, deep down, there is little difference between data and code. This realization can cast higher-level programming language concepts in a new light, even if one never delves into bare-metal engineering.
You can really take this two ways: one part being about von Neumann architectures, the other side culminating in Lisp. They’re fairly separate but both interesting views on computing. Of course, the opposite viewpoints also do exist, as Harvard architectures and your favorite “blub” language are still used today…
This became apparent to me once I learned a Turing machine can compute anything λ calculus can and vice versa. Is this also a conclusion you can reach by thinking about computer hardware? I'd love to hear more about the thought process.
I spent one of the most educational years of my life this last year reading "Designing Data-Intensive Applications", "Code: The Hidden Language of Computer Hardware and Software", and "The Design of the UNIX Operating System".
While they haven't given me any expertise on how to actually use these technologies to solve intricate problems (only experience and true depth can do that), they have given me a near-complete picture of what ordinary computers do today. Nothing is magic. Everything is a logical operation.
I only regret I hadn't read these back when I was a teenager.
PS. Next thing I would like to learn is the general design of modern CPUs and branch prediction.
I do know how computers work, at the transistor level. But it boggles the mind to try to understand how a computer works from the transistor level to the point of where your mouse icon traverses across the screen. Trying to understand it at all levels will really give you a headache. Or at least, it does for me.
I tend to think that if more people understood how to program at this level, the weight of software and websites would be a lot less. I can't log in to my bank website without downloading 1 MB of crap. It's super easy to program by importing *, but if you can write in assembly you can really optimize. It's just not worth the effort these days. So the improvement of Moore's law (RIP) ends up going to programmers' salaries instead of to performance improvements. Kind of an interesting way to think about it.
Amusingly, in high school my physics teacher told the class a story about how things had become so complex that no single person could understand everything about the classroom computer.
That was in 1978 and the computer was an Apple II.
>So the improvement of Moore's law (RIP) ends up going to programmers' salaries instead of to performance improvements.
This statement doesn't really make sense. If programmers spent the time to optimize everything instead of gluing a bunch of inefficient libraries together, software would cost even more than it does now. So that's even higher salaries because there is even more business competition for the same pool of available programmer hours in the workforce (and that doesn't even take into account the fact that you would wipe out 90% of the programming workforce if writing assembly is required).
Moore's law means that you can afford inefficient code, so it's taking money from programmers and keeping it in the businesses that would have originally had to pay for a hand-tuned piece of code.
My CS systems PhD advisor used to say, 'you waste the abundant resource' in systems design. Lots of CPU cycles available, lots of bandwidth available (except when not!) - so waste it and save programmer time.
At least, in theory.
It's not about knowing assembly or not. It's resources, time, cost, money, ROI.
I think the problem is actually the opposite. There aren't enough abstractions that help programmers write good code. So when a programmer has to do something complex, they end up doing the easy thing which is just importing every library they need. If instead, there was a way to say "only import this when I'm doing this action" and there's no way for the programmer to explicitly say what to import, then that problem almost disappears.
> I tend to think that if more people understood how to program at this level, the weight of software and websites would be a lot less.
I’ve done assembler work, I’m perfectly competent at it. I would never try to build a website in anything low level like that. It’s optimizing for completely the wrong set of problems. The world has lots of network bandwidth, cpu cycles and volatile memory. You won’t find many situations where having a better performing website is worth sacrificing the savings in human labor you get from high level abstractions.
The big reason that everything is so slow is that we just can't standardize on things. Take assemblers, for example: there are many to choose from! So we solve this by building slow abstraction layers.
But we can't standardize because technology is improving so quickly. Any standard would be quickly obsolete.
If we ever standardize on one CPU, one screen size, one GPU, etc etc. Then we can make big gains by hyperfocusing on this.
> So the improvement of Moore's law (RIP) ends up going to programmers' salaries
If businesses demanded that everything be of optimal performance, developer salaries would most likely increase because that would increase demand for developer labour in the market without increasing its supply. And I know that many developers would be more than happy to do this (performance optimisation being one of these classic 'nerd sniping' tasks that really appeals to many programmers) but are simply not granted the time to.
And there is a reason that hardly anyone's boss is telling them to eke out every bit and clock cycle: they can't afford it, or at least don't want to pay to do that. So the "benefit" of doing this actually lands squarely on the business, if you want anywhere to look for wasted resources.
As someone who used to work in banking software, I can tell you where the 1 MB of crap comes from: many banks create their web apps out of ready-to-use components, which are designed to be completely independent, and this brings a huge weight penalty.
Oh spare me. If you went out to write a secure banking website in assembly code you'd come back in 20 years with something even more bloated than we have now. Why? Because the first thing you'd write is a higher-level programming language suited to actually building websites. Then you'd write a web framework on top of that. Then you'd write a bunch of buggy, insecure crypto code. Et cetera, et cetera, until you've reinvented a much worse version of the stack we have now.
What if? While I’d love to really know how computers work at this level of depth, I can’t help but feel like it’s a colossal waste of time these days if you already have other sharply tuned high level programming skills.
The ROI isn’t great, the time you spend learning this is time you could have spent going deeper into your chosen craft and becoming even more masterful.
By the time you do learn a dangerous level of assembly and reverse engineering knowledge, you’ll still have to fight for entry level gigs to get in the industry unless you’re really good at finding niche and high paying work... meanwhile people still want to pay you six figures+ for your decade or more of experience writing everyday software at scale.
Alas, whereas it once seemed like anything was possible, as I get older I come to realize the best use of my time is to further perfect the things I already know how to do well, so that one day I can truly become a sagacious master, or at the very least not fall behind my peers and become “outdated”.
Learning opportunities like this are inspiring, but ultimately just a passing curiosity for me. I hope someone young finds them worthwhile.
Agree. Yes, it’s extremely interesting. I look back fondly on my youthful hours/days/weeks spent tinkering with assembly.
And yet. To a junior dev who asked me for advice on this stuff today, I’d just say read Code: The Hidden Language of Computer Hardware and Software, and if you like that then work through Nand to Tetris, and then go back to whatever you need for your day job.
I’m a CE and need to know exactly this stuff.
What I don’t need to know, and don’t at all, is how web APIs really work, what a kubernetes is, there is something called Elixir I think and it may or may not be related to some Etherium not-money, and have you heard about elastic searches? Because I haven’t.
But I do need to know extremely tight programming for tiny microcontrollers, how many cycles this or that instruction will take, when pipelines get flushed, device drivers, datasheets, and silicon errata.
If you deal in higher level stuff, you likely use more ram for a text field than I may have on my entire chip.
Yeah, although I think understanding how computers work at a basic level (the components of a CPU, what instructions are, caches, etc.) has a pretty good ROI because it makes it easy to understand some concepts at a higher level (concurrency for example), understand why some things work a certain way, think about some common performance optimisations, tradeoffs, etc.
I think that's true of most things—you don't need to (and likely cannot) know everything in real depth, but having some knowledge tells you where to look, what to look for, etc.
The CPU/RAM bottleneck isn't getting any better. Latency is actually increasing with newer versions of DDR.
Most of your CPU time is spent waiting on memory, and compilers can't choose efficient data structures for you. You still need to know the implementation details to get a responsive user experience.
I've tried to deep dive the modern PC a few times before and I always end up giving up because it quickly becomes a proprietary and undocumented minefield.
This is all totally correct, but why not transition into management and get even more money? Dealing with people is more future-proof than any computer programming
Looks pretty interesting. I took a computer architecture class in Jan-Feb this year, and it was one of my favorite things in a long time. I've written software for ~6 years but I didn't really know much about how CPUs work, what an instruction is, what registers are, etc. so I learned a lot.
I thought a lot of these details could be boring or hard to grasp, but it was the opposite—to me, things like how logic gates worked, how we remember values between computer cycles were all very fascinating, especially compared to the kind of work I've done for a long time.
I've learned a bit more about operating systems and databases since then, and I think knowing a bit of computer architecture helped me understand several topics in much better depth. So even if knowing these things doesn't have a very direct ROI in terms of getting a job, I think it's worth spending the time to learn at least the basics.
If you knew how computers worked.... Websites wouldn’t be 4MB on the front page...
Some of you would be ashamed of the choices you make in typed languages...
Others would be horrified to find out what happens in the back end of untyped languages...
You would constantly find it amazing that single CPU core systems aren’t ever actually doing more than one thing at a time and the concurrency of moving the mouse on the screen and ANYTHING else the system is doing is just an illusion...
You wouldn’t shit on C language... at least less.
... IDK, I’m sure there are a ton more I’m not thinking about. I’ll leave it up to someone else to articulate better... something something FaceTime is usually a massive waste of technology.
These are bad musings from an embedded programmer who is dealing with a 2020-released chip and its 4KB of RAM.
I don't think knowing how computers work stops 4MB of JS on websites, just that the kind of person inclined to learn about the foundations of computers is more likely to not want bloated websites in the first place, and is less likely to work as a web developer.
> You wouldn’t shit on C language... at least less.
Nope. If anything, while programming for microcontrollers I learned C is too remote from the hardware and leaves too much implementation-defined. Struct padding? Implementation-defined. Bit field layout? So implementation-defined you'll have to roll your own with shifts and masks. Accessing anything mapped to a memory address? The pointers as defined in the C standard aren't what you think, but fortunately the implementation often is. Even standard fixed-width integer sizes (stdint.h) are only 21 years old, which means there is still legacy code that either doesn't use them or wraps them in layers of typedefs to work with toolchains or libraries that don't. And good luck if you have 24-bit registers.
Of course it's frustratingly low level at the same time. I don't know what could be a good choice, and C is what everyone already uses anyway.
>You wouldn’t shit on C language... at least less.
Anyone who still thinks C is the way computers work is in for a surprise when they realize all modern x86 machines are just emulating x86 in microcode for backward compatibility reasons.
> If you knew how computers worked.... Websites wouldn’t be 4MB on the front page...
Websites are 4MB because they want a certain experience and a certain functionality, and developers assemble various common components and libraries to achieve this with the least amount of custom development.
A developer knowing assembler will not change the priorities of web site owners.
Back when I was in electronics during our Digital circuits section we built all the logic from gates and such (TTL, CMOS) and then at the end we learned 8085 assembly using an "Emac 8085 primer trainer" which we all had to build ourselves. It had no battery backup (optional) and we were not using the serial output at all...so we lost any programs we made on power off. It was very interesting to learn and I feel fortunate that I understand computers on that level.
That being said...I don't really do code at all :) Most of what I do is on the hardware side of things.
I also recommend the book "But How Do It Know? - The Basic Principles of Computers for Everyone." It explains circuit-based logic gates, how you can use binary numbers for all the basic math operations and then for encoding other characters, building up to more and more complex operations...
I had to stop and sit and think several times while reading that book, but it was a real thrill when I started to understand it, feeling that small flash of near-omniscience...
This looks pretty interesting. I didn't look into the course materials, but I wonder if someone learning "how computers work" would benefit from a casual introduction to a simple microprocessor hardware architecture.
For me, it was the Z80, because there happened to be a book at Radio Shack, published by Howard Sams, that was extremely well written. But since this is about x86 assembly, the old 8086 isn't all that bad in terms of grasping how the different kinds of instructions actually play out in the registers, memory and i/o busses, and so forth. And it's not so bad to learn about the features of modern processors in light of how they compare to early chips. For instance in the 8086, a pointer is not an abstract concept, but a real physical thing!
Despite modern microcontrollers being much more sophisticated than the Z80, that foundation has helped me learn and understand things like how to read the documentation for special function registers and advanced peripheral chips.
The site implies assembly is how computers really work. IMO, assembly is just an incidental implementation detail of little note. High level language chips are possible, after all.