IMHO older computers are more interesting because you can have a much better relative understanding of what they consist of and how they work, and a much higher relative degree of control over them. They induce a sense of curiosity, flexibility and security this way.
Compared to the computers of the previous century, which could be built, modified, repaired and operated consciously, modern ones are more like disposable magic-button black boxes which will turn into pumpkins as soon as the vendor servers turn off, and go extinct like ancient magical creatures as soon as the factory in Taiwan shuts down, because nobody has a serious idea about all their internals and how to produce them anymore.
Sadly, there is already a generation of programmers who aren't even interested in being able to assemble their own PC from a set of boards, let alone in understanding anything about things like registers and the physics behind them.
UPDATE: yes, I would say the same about cars.
I love watching retro computing channels on YouTube, and many of them do such a great job of explaining CPUs like the 6502, 6510, Z80 or Motorola 68000. The 6502 is especially interesting, not because it's a marvel of technology, but because you can learn how it works in a reasonable time frame. I took university courses in computer architecture, but I can't explain how a modern PowerPC or AMD64 CPU works. The 6510 is something I can easily follow when some guy on YouTube explains how a C64 works and how he goes about debugging and repairing a broken system.
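That learnability is easy to demonstrate: a useful subset of the 6502 fits in a page of code. Here is a toy sketch (not a faithful emulator) of its fetch-decode-execute loop. The opcode bytes are the real 6502 encodings; flags, cycle counts and most of the instruction set are omitted.

```python
def run(program):
    mem = bytearray(256)                # pretend 256 bytes of RAM
    mem[: len(program)] = program
    a = x = y = 0                       # the 6502's three main registers
    pc = 0                              # program counter
    while True:
        op = mem[pc]; pc += 1           # fetch
        if op == 0xA9:                  # LDA #imm: load accumulator
            a = mem[pc]; pc += 1
        elif op == 0xAA:                # TAX: transfer A to X
            x = a
        elif op == 0xE8:                # INX: increment X, wrapping at 8 bits
            x = (x + 1) & 0xFF
        elif op == 0x85:                # STA zp: store A to a zero-page address
            mem[mem[pc]] = a; pc += 1
        elif op == 0x00:                # BRK: stop (good enough for a toy)
            return a, x, mem
        else:
            raise ValueError(f"unimplemented opcode {op:#04x}")

# LDA #$2A; TAX; INX; STA $80; BRK
acc, xreg, mem = run(bytes([0xA9, 0x2A, 0xAA, 0xE8, 0x85, 0x80, 0x00]))
# acc == 42, xreg == 43, mem[0x80] == 42
```

The real chip is this loop in silicon, which is why one person can hold the whole thing in their head.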
I think it's all state-of-the-art vs grassroots-tech-tree
Meaning - the state of the art stuff can't be explored by normal people, but over time the knowledge and tools trickle down and it becomes democratized.
Older cars were accessible to shade-tree mechanics, who could work on most stuff. When they became electronic, it cut some people off... until things like CAN bus became known, and later tools became generally available to folks with a laptop.
The only twist nowadays is that manufacturers are actively preventing some things through cryptographic signatures and the like. Hopefully, eventually, tools will emerge to let people mess with things again.
Old computer systems are fascinating from many points of view: The article discusses the software used to wring the last drop of performance out of hardware that we would probably now consider to be inadequate for running a disk controller. It's super cool that many of these systems are now available as emulators! I was a little surprised that the author only got twice the performance of the DtCyber in an emulator until I saw that they were emulating on an 800MHz P3 :)
In addition, visiting these old devices IRL is pretty instructive: one thing that always strikes me when I visit the Computer Sheds in Yorkshire (http://www.computermuseum.org.uk/ - an absolute gem BTW) is the enduring problem of thermal management. Up close, those old mainframes are basically huge air conditioning systems with some chips attached. My understanding is that managing heat is >still< the ultimate factor limiting performance (otherwise we could just stack up silicon in 3D)...
> My understanding is that managing heat is >still< the ultimate factor limiting performance
It seems fitting that I'm hearing the fans spin up to full blast on my ThinkPad X1 Extreme laptop supercomputer as I compile some code in an Ubuntu VM while starting up a CPU-hungry Point Of Sale system in a Windows VM, and reading your comment on the Windows host OS.
It is nice that I can step away from the computer and just listen for the fans to quiet down again to tell when everything is ready to go.
Most things are interesting simply because they were produced by and through the constraints of their time. Seeing how people solved these contextual problems is very often fascinating. And then there's the spirit/aesthetics/goals of the era.
What has always fascinated me about old hardware/software is the "what if" aspect.
Up until the mid-90s, there was a real diversity of ideas and architectures in computing: 68K, PPC, x86, RISC, ARM, MIPS, i860, and I'm forgetting a lot of others. There were a lot of interesting ideas, some of which did become standard, while others died either for technical reasons (lack of adequate 3D capability on the Amiga) or because of business deals that didn't pan out (such as BeOS on Macintosh being dumped in favor of buying out NeXT).
Of course x86 seems standard and humdrum today, but it wasn't necessarily meant to be that way.
We could have been running BeOS on i860-based hardware, with MiniDiscs for removable storage, if things had panned out differently. And I find that fascinating to consider, when examining old computers.
I wonder why that never amounted to anything. If I recall, a few drives were made, but the cost just meant they were out of reach. The disc was just so cool, like something out of sci-fi.
My favorite old computer is the Mark I perceptron, which was a neural network hardware computer (single layer) with light-sensitive visual receptors, weights controlled by self-turning potentiometers, and manually wired full connectivity: https://en.wikipedia.org/wiki/File:Mark_I_perceptron.jpeg
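The Mark I's learning rule still fits in a few lines. Below is a single-layer perceptron sketch: the motor-driven potentiometers are just these weight updates, and the photocell "retina" is replaced here by a two-input toy problem (OR). This is illustrative, not a model of the actual machine.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    w = [0.0, 0.0]      # connection weights (the potentiometer settings)
    b = 0.0             # bias
    for _ in range(epochs):
        for inputs, target in samples:
            out = 1 if w[0]*inputs[0] + w[1]*inputs[1] + b > 0 else 0
            err = target - out                  # 0 when the guess is right
            w[0] += lr * err * inputs[0]        # Rosenblatt's update rule
            w[1] += lr * err * inputs[1]
            b += lr * err
    return w, b

# learn the OR function from examples
or_gate = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_gate)
predict = lambda x: 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
# predict((0, 0)) == 0; predict((1, 0)) == 1
```

The hardware version did exactly this, except "err" physically turned a motor on each weight.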
The university I went to used Xerox Sigma 6/7/9s. One of them is now in the Living Computer Museum. Yeah! After programming 6502 assembler in high school, having multiple blocks of 16 32-bit registers was breathtaking. My brain was wired to the 6502 X/Y/A thing, so I doubt I ever used more than 5-6 of them at a time. After doing a lot of 8-bit micro and x86 assembly, doing ARM assembly for the first time felt similarly freeing.
These are the lineage of machines Knuth implemented his famous ALGOL-60 compiler on. There's a nice video about the B6500 that was discussed on HN years ago: https://news.ycombinator.com/item?id=7880027
I did some work adjacent to a UNISYS ClearPath "A Series" machine and later a PC-based version back in the early 2000s. Watching the operators and field service technicians working on it I got a feeling of a very mature environment. (Not necessarily fun to use by the look of it.) I got a bit of an IBM AS/400 feeling watching users interact with it in terminal emulation. For somebody with a Unix and MS-DOS background it just seemed odd.
After that contract was up I mostly forgot about it. I heard a talk where Alan Kay referred to some old Burroughs system. I got to reading about Burroughs and learned UNISYS was a successor. That led down a rabbit hole that ended up at these B5000-descended machines.
It looks like it was a well thought-out architecture that persists today (albeit in software, rather than hardware). The operating system, MCP, is still under development.
> The operating system, MCP, is still under development.
MCP is, by far, the most user-hostile OS I've ever seen. It's no surprise Bonnie MacBird, who wrote Tron's screenplay and is Kay's wife, used it as the name of the villain.
> In fact, the operating systems that run on "old computers" are surprisingly sophisticated and complex.
Are they? Or are we just slowly forgetting that everything in modern software is composed of concepts invented pre-1980 (and in fact pre-1970 for the most part)?
It's called "Turing's Curse" [0] (https://www.youtube.com/watch?v=hVZxkFAIziA)
To clarify, it's the "surprisingly" part that is not true. Certainly the software is sophisticated. What is surprising is how often we think our modern software is "sophisticated".
EDIT: but, I guess we have done some modern cool stuff with GPUs, consensus algorithms including PoW, and a few cool ML model architectures and training techniques.
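Of the "modern cool stuff" above, proof-of-work is the easiest to show concretely: find a nonce whose hash has some number of leading zero bits. This is a toy with a tiny difficulty setting, not a real consensus implementation.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int = 12):
    """Return a nonce such that sha256(data + nonce) starts with
    `difficulty_bits` zero bits (i.e. the digest is below a target)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"hello")          # ~2^12 hashes on average at 12 bits
```

The asymmetry is the whole trick: finding the nonce takes thousands of hashes, but anyone can verify it with one.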
I think there was a certain degree of bravery (or foolishness, really) that authors of older software had WRT their willingness to couple with certain things (poking registers/DMA with specialized hardware, monkey-patching OS components, etc.)
We have a lot more layers of abstraction, in the name of productivity. That's certainly more complex, and from a certain point of view, more sophisticated.
Meanwhile (I can't be the only one who's noticed this?) it seems like it takes a team 5x as large 2x as long to write a program that does the same thing as some 80s or 90s equivalent. I'm not sure what to make of that.
What I like about old computers: creative solutions. These machines were severely limited by today's standards, so to do anything useful, interesting workarounds had to be developed.
In the home-computer field, for example, using off-the-shelf cassette tape as storage, compiling code in multiple passes with persistence, simpler byte-code interpreting to increase code density, memory bank switching and other techniques are part of the charm of using that hardware.
Also, it is usually impressive to watch constrained hardware doing anything useful, and doing it quickly.
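The byte-code trick mentioned above is a nice one to sketch: instead of native code, the program is a string of one-byte operations run by a small interpreter, so each "instruction" costs one byte instead of several. The opcode assignments below are made up for illustration.

```python
# stack operations keyed by opcode byte
OPS = {
    0x01: lambda st: st.append(st.pop() + st.pop()),   # ADD
    0x02: lambda st: st.append(st.pop() * st.pop()),   # MUL
}

def interpret(code, stack):
    """Run one-byte opcodes against a stack.
    Opcodes 0x10..0x1F push a small literal packed into the opcode itself."""
    for op in code:
        if 0x10 <= op <= 0x1F:
            stack.append(op - 0x10)     # PUSH 0..15 in a single byte
        else:
            OPS[op](stack)
    return stack[-1]

# (2 + 3) * 4 takes only five bytes of "program"
result = interpret(bytes([0x12, 0x13, 0x01, 0x14, 0x02]), [])
# result == 20
```

On a 64K machine, halving code size this way was often worth the interpretation overhead.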
> But the PP concept and the offloading of the OS from the main CPU is still a very interesting idea - and one which is unlikely to be explored in foreseeable future machines.
Interestingly, a lot of the system services on macOS run on the power-efficient but slow cores on the M1, keeping the beefy cores for user applications. I wonder if Linux could have an attribute for executables indicating which kind of processor they should preferably be scheduled on.
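As far as I know Linux has no such per-executable attribute; the closest existing knob is CPU affinity, which restricts a process to an explicit core set. Which core numbers are the efficiency cores is machine-specific, so the {0, 1} below is a made-up example.

```python
import os

def prefer_cores(cores):
    """Restrict the current process to `cores`, where the OS supports it."""
    wanted = set(cores)
    if hasattr(os, "sched_setaffinity"):    # Linux-only API
        os.sched_setaffinity(0, wanted)     # 0 means the calling process
        return os.sched_getaffinity(0)
    return wanted                           # no-op elsewhere

# e.g. keep a background service on (hypothetical) efficiency cores 0 and 1:
# prefer_cores({0, 1})
```

The shell equivalent is `taskset -c 0,1 ./service`. It's pinning rather than a preference, so it's a blunter tool than what macOS does.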
One thing I always notice on the CDC 6x00 family is that the vectors on screen are drawn so fast (to fit enough text on the high persistence phosphor before needing to redraw) that analog artifacts appear on the fonts themselves, giving them a whimsical, almost Comic Sans, look.
Funny to see the analog side isn't able to keep up with the digital hardware. That's peak Seymour Cray.
The college I went to had air cooled tape drives. For some reason one unit would get too hot and stop working.
Solution: Go behind it and flap the large access door a couple of times, and then it would start up and finish a compile. As you were waiting for the printouts it would keep cooling down and do the next job if it was not too long.
“With the exception of the Intel Itanium family, all of the architectural features that contribute to the performance of today's microprocessors first appeared (and were pretty fully explored) in a series of "mainframe" computers designed between the late 1950's and 1975.”
Here are just a few innovations hugely important to performance that came later: general out of order execution with precise exceptions, shared memory multiprocessor, memory disambiguation prediction, memory renaming.
Some of the innovations introduced during the described era were by no means “fully explored” either. For example, branch prediction advanced rapidly through the 80s and 90s, and the best branch prediction algorithm known today (TAGE) was developed in 2000s.
And, of course, architectural innovation continues to this day.
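To make the branch-prediction point concrete, here is the classic 2-bit saturating counter predictor, roughly the 1980s state of the art (TAGE is far more sophisticated, layering tagged tables with geometrically increasing history lengths). The trace below is a made-up loop branch.

```python
def simulate(branch_outcomes, table_size=1024):
    """Predict each branch with a per-address 2-bit saturating counter.
    Counter >= 2 predicts taken. Returns the fraction predicted correctly."""
    counters = [1] * table_size          # start in "weakly not taken"
    correct = 0
    for addr, taken in branch_outcomes:
        idx = addr % table_size
        prediction = counters[idx] >= 2
        correct += prediction == taken
        # train: saturating at 0 and 3 means one odd outcome can't flip state
        if taken:
            counters[idx] = min(3, counters[idx] + 1)
        else:
            counters[idx] = max(0, counters[idx] - 1)
    return correct / len(branch_outcomes)

# a loop branch taken 9 times, then not taken once, repeated
trace = [(0x400, i % 10 != 9) for i in range(1000)]
accuracy = simulate(trace)
# accuracy is high; the counter only misses around each loop exit
```

Modern predictors chase exactly the misses this scheme can't avoid, which is why the field kept advancing for decades after 1975.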
370 assembly language was interesting in so many ways. Applications were responsible for maintaining their own call stack, and there were processor-level instructions for taking a number encoded in EBCDIC and converting it to a machine-level int.
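The conversion those instructions performed can be sketched in a few lines. EBCDIC digits are 0xF0..0xF9, so a zoned-decimal field is just those bytes, with the last byte's high nibble carrying the sign (0xC for positive, 0xD for negative); this is a software sketch of the work, not the 370's actual microarchitecture.

```python
def zoned_to_int(field: bytes) -> int:
    """Convert an EBCDIC zoned-decimal field to a Python int."""
    value = 0
    for b in field:
        value = value * 10 + (b & 0x0F)     # low nibble is the digit
    sign = field[-1] >> 4                   # zone nibble of the last byte
    return -value if sign == 0xD else value

# "123" in EBCDIC is F1 F2 F3; a D zone on the last byte makes it negative
# zoned_to_int(bytes([0xF1, 0xF2, 0xF3])) == 123
# zoned_to_int(bytes([0xF1, 0xF2, 0xD3])) == -123
```

Having this as single instructions made COBOL-style decimal record processing very cheap on that hardware.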
This is only just slightly related, but I am working on a project based on Algorand, and one of the interesting things about it is that the smart contracts are written in something somewhat like an assembly language. It actually is deliberately not Turing complete because it does not allow looping or recursion, but has a few interesting features and constraints.
For example, it has built-in cryptography instructions.
The need to execute as many "programs" as possible to have a high transaction rate has pushed the design towards these intriguing constraints.
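A sketch of why "no loops" buys throughput: a tiny stack machine in that spirit (the opcode names below are made up, not Algorand's actual instruction set). With no backward jumps, execution cost is bounded by program length, so a node can put a hard cap on per-transaction work.

```python
def evaluate(program, args):
    """Run a loop-free stack program; the contract approves if the
    final top-of-stack is truthy. Cost is O(len(program)), always."""
    stack = []
    for instr in program:                # one forward pass, no jumps
        op = instr[0]
        if op == "push":
            stack.append(instr[1])
        elif op == "arg":
            stack.append(args[instr[1]])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "lt":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a < b else 0)
        else:
            raise ValueError(f"unknown opcode {op}")
    return bool(stack.pop())

# "approve if arg0 + arg1 < 100"
program = [("arg", 0), ("arg", 1), ("add",), ("push", 100), ("lt",)]
# evaluate(program, [40, 50]) is True; evaluate(program, [60, 50]) is False
```

A validator can price a contract by counting instructions before running it, which is exactly the property a high transaction rate demands.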
So, I guess video and network cards are the two most common remaining components that could be called "peripheral processors"?
My interpretation of the ( frustratingly sparse ) documentation for the AWS Nitro hardware support seems to imply that it's pushed a bit more of the processing back out to the peripherals - which seems to buck the trend, but also seems highly effective.
For example, so far as I can tell, AWS's NVME drives have a lower latency from within a Nitro EC2 vm than a 905P Optane accessed from a KVM VM with PCIE passthrough on my play machine at home - although it's possible I've done something wrong with the setup!
What is "interesting" obviously depends on the person. I always liked playing with old *nix workstations from the 90's. Something about tinkering on a box that would have cost over $50,000 new is interesting to me.
There were 4 of the then state-of-the-art, fastest in the world, CDC-7600 and CDC-6600 computers installed at Lawrence Livermore Lab (LLL, now LLNL, Lawrence Livermore National Lab) when I worked there during the summer of 1972.
jhvkjhk|4 years ago
When RISC-V and open-source FPGA toolchains become more popular, I guess more and more programmers will come back to play with hardware.
BuildTheRobots|4 years ago
Wasn't the disk controller chip in the BBC Micro significantly more powerful than the main CPU? I think some games used it as a coprocessor.
bombcar|4 years ago
1. 8-bit bytes (7, 6, 9 were common)
2. Byte-level addressing (word addressing was common)
3. One's complement
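Item 3 fits in two lines: on a one's complement machine (the CDC 6600 was one), negation is a bitwise NOT, which also means there are two encodings of zero.

```python
def ones_complement_negate(value, bits=8):
    """Negate a value in one's complement: flip every bit."""
    return value ^ ((1 << bits) - 1)

# +5 is 0000_0101, so -5 is 1111_1010
assert ones_complement_negate(5) == 0b11111010
# negating zero gives all ones: "negative zero", the format's famous quirk
assert ones_complement_negate(0) == 0b11111111
```

Two's complement won in part because that double zero complicates comparisons in hardware.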
nix23|4 years ago
Up-to-date MVS distribution: http://wotho.ethz.ch/tk4-/
Up-to-date VM/370 distribution: http://vm370.org/VM/V1R1.1
390 Emulator: https://github.com/SDL-Hercules-390/hyperion
Best 3270-Terminal for windows: http://www.tombrennansoftware.com/download.html
3270-Terminal for *nix or Win: http://x3270.bgp.nu/
rbanffy|4 years ago
I wonder if my 3270 font would work with Vista tn3270... It doesn't with x3270 :-(
Taniwha|4 years ago
http://www.chilton-computing.org.uk/acl/literature/reports/p...