Given enough power and space efficiency, you would start putting multiple CPUs together for specialized tasks. Distributed computing could have looked very different.
This is more or less what we have now. Even a very pedestrian laptop has 8 cores. If 10 years ago you wanted to develop software for today’s laptop, you’d get a 32-gigabyte 8-core machine with a high-end GPU. And a very fast RAID system to get close to an NVMe drive.
Computers have been “fast enough” for a very long time now. I recently retired a Mac not because it was too slow but because the OS is no longer getting security patches. While their CPUs haven’t gotten twice as fast for single-threaded code every couple of years, cores have become more numerous, and extracting performance now requires writing code that distributes work well across increasingly large core pools.
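A minimal sketch of what that shift means in practice, in C with POSIX threads (the array-summing task is made up for illustration; the point is sizing the worker pool to whatever core count the machine reports):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N 1000000
    static double data[N];
    static double partial[64];      /* one result slot per worker, up to 64 */

    struct span { int id; long lo, hi; };

    /* Each worker sums its own slice; scaling comes from the core pool. */
    static void *sum_slice(void *arg) {
        struct span *s = arg;
        double acc = 0.0;
        for (long i = s->lo; i < s->hi; i++)
            acc += data[i];
        partial[s->id] = acc;
        return NULL;
    }

    int main(void) {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN); /* size to the machine */
        if (ncores > 64) ncores = 64;
        for (long i = 0; i < N; i++) data[i] = 1.0;

        pthread_t tid[64];
        struct span spans[64];
        long chunk = N / ncores;
        for (long t = 0; t < ncores; t++) {
            spans[t].id = (int)t;
            spans[t].lo = t * chunk;
            spans[t].hi = (t == ncores - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_slice, &spans[t]);
        }
        double total = 0.0;
        for (long t = 0; t < ncores; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("%ld cores, total = %f\n", ncores, total);
        return 0;
    }

The same binary uses 4 cores on an old laptop and 16 on a workstation without recompiling, which is roughly what “distributing work across increasingly large core pools” buys you.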
The Commodore 64 and Ataris had intelligent peripherals. Commodore’s drive knew about the filesystem and could stream the contents of a file to the computer without the computer ever becoming aware of where the files sat on the disk. They could also copy data from one disk to another without the computer being involved.
Mainframes are also like that - while a PDP-11 would be interrupted every time a user at a terminal pressed a key, IBM systems offloaded that to the terminals, which kept one or more screens in memory and sent the data to another computer, a terminal controller, which would then, and only then, disturb the all-important mainframe with the mundane needs of its users.
This is what the Mac effectively does now - background tasks run on low-power cores, keeping the fast ones free for the interactive tasks. More specialised ARM processors have 3 or more tiers, and often have cores with different ISAs (32- and 64-bit ones). Current PC architectures are already very distributed - your GPU, NIC/DPU, and NVMe SSD all run their own OSs internally, and most of the time don’t expose any programmability to the main OS. You could, for instance, offload filesystem logic or compression to the NVMe controller, freeing the main CPU from having to run it. The same could be done for a NIC - it could manage remote filesystem mounts and only expose a high-level file interface to the OS.
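On macOS the routing of background work onto the efficiency cores is driven by QoS classes rather than explicit core pinning. A hedged sketch: the QoS call is a real macOS pthread extension, but the “housekeeping” task is a placeholder I made up:

    /* macOS-only sketch: mark this thread as background work, which the
       scheduler on Apple silicon steers onto the E-cores. */
    #include <pthread.h>
    #include <pthread/qos.h>
    #include <stdio.h>

    static void *housekeeping(void *arg) {
        (void)arg;
        /* QOS_CLASS_BACKGROUND is the lowest tier; the kernel keeps the
           performance cores free for interactive work. */
        pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
        /* ... indexing, compression, whatever can wait ... */
        puts("running on whatever cores the scheduler spares");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, housekeeping, NULL);
        pthread_join(t, NULL);
        return 0;
    }

The design choice mirrors the terminal-controller idea above: the program declares how urgent its work is, and the placement onto fast or slow silicon is somebody else’s problem.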
The downside would be that we’d have to think about binary compatibility across platforms from different vendors. Anyway, it’d be really interesting to see what we could do.