To deploy a second-hand Cray-1 at UQ, we had to raise the ex-IBM 3033 floor; it turned out the bend radius for Fluorinert was NOT the same as for a water-cooled machine. We also installed a voltage re-generator, which is basically a huge spinning mass: you convert Australian volts to DC, spin the machine, and take off re-generated high-frequency volts for the Cray, as well as 110 V at the right Hz for boring stuff alongside. The main bit ran off something like 400 Hz power; for some reason the CPU needed faster mains volts going in.
The Fluorinert tank has a ball valve, like a toilet cistern. We hung a plastic lobster in ours, because we called the Cray "Yabbie" (a Queensland freshwater crayfish).
That re-generator's circuit breakers are... touchy. The installation engineer nearly wet his trousers flipping it on; the spark-bang was immense. Brown-trouser moment.
The front-end access was Unisys X11 Unix terminals. They were built like a brick shithouse (to use the Australianism) but were nice machines. I did the acceptance testing; it included running up X11 and compiling and running the largest Conway's Game of Life design I could find on the net. Seemed to run well.
We got the machine as a tax offset for a large Boeing purchase by Australian defence. At end of life, one of the operators got the love-seat and turned it into a wardrobe in his bedroom.
Another, more boring Cray got installed at the Department of Primary Industries (Qld government) to do crop and weather modelling. The post-Cray-1 stuff was... more ordinary. The circular compute unit was a moment in time.
400 Hz is really the next best thing to a switching supply, as the transformers and filter capacitors can be smaller than they would need to be at 50/60 Hz. It can save cost and space for filter capacitors, especially in a three-phase system where there's not as much ripple to deal with.
Another rationale may have been that the flywheel on the motor-generator would cover a multitude of power-quality sins.
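The component-size argument above can be put in rough numbers. A minimal sketch, assuming the textbook reservoir-capacitor approximation C ≈ I / (pulses × f × ΔV); the 100 A load and 1 V ripple figures are invented for illustration, not Cray specs:

```python
# Back-of-envelope filter-capacitor sizing for a rectified supply:
# C ≈ I / (f_ripple * dV). A full-wave single-phase rectifier gives 2
# pulses per mains cycle; a three-phase bridge gives 6 (hence less ripple).
def filter_cap_farads(load_current_a, mains_hz, ripple_v, phases=1):
    pulses = 2 if phases == 1 else 6
    return load_current_a / (pulses * mains_hz * ripple_v)

c50 = filter_cap_farads(100, 50, 1.0)    # 100 A load, 1 V ripple, 50 Hz
c400 = filter_cap_farads(100, 400, 1.0)  # same load at 400 Hz
print(c50, c400, c50 / c400)             # capacitance shrinks 8x at 400 Hz
```

Same load, same ripple budget: moving from 50 Hz to 400 Hz cuts the required capacitance (and the transformer core size, by a similar scaling) by a factor of eight.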
> the main bit ran off something like 400hz power, for some reason the CPU needed faster mains volts going in.
Aerospace originally did that to reduce component size; CDC and IBM took advantage of the standard in the early '60s.
Strangely, it seems mainframes didn't adopt switching power supplies until the end of the '70s, despite the tech being around since the end of the '60s.
I used a Cray C-90 and a 256-processor T3D from 1995-1999. The T3D used commodity Alpha 21064s, and was already behind the T3E when we got it (Cray refurbished). By the end it was outclassed by an SGI Origin box with 8 CPUs. I'd already ported a lot of software from SunOS and HP-UX to Irix and Unicos (Cray), and it was easy to move it to Linux in the end.
There's a lot of discussion here https://retrocomputing.stackexchange.com/questions/7412/why-... but nothing seems conclusive. I would wager the last answer, "IBM was using 400 Hz", to be the most directly causal reason. The motor-generator configuration might provide galvanic isolation and some immunity to spikes and transients as well?
I knew a guy who worked at one of the national labs that had its own Cray supercomputer, in a computer room with a big observation window that visitors could admire it through, of course (all Crays required observation windows to show them off).
Just before a tour group came by, he hid inside the Cray, and waited for them to arrive. Then he casually strolled out from the back of the Cray, pulling up the zipper of his jeans, with a sheepish relieved expression on his face, looked up and saw the tour group, acted startled, then scurried away.
The Cray 1-ish machines I had access to at Shell and Chevron were most definitely tucked away in rooms with no visibility into them. In fact the Chevron machine room had pretty stern "no photography" placards, which I took seriously, and that is sadly why I don't have a photo of myself sitting on the loveseat of their machine.
Getting access took just short of an act of God, and I was a sysadmin in the central support group! They didn't want us puttering on the machines, so as far as I could tell it mostly sat idle.
I was part of a group from the University of Minnesota that traveled to Chippewa Falls to tour the Cray plant. I expected to see exactly what you describe, the computer behind a big observation window. But instead they took us to a machine that was undergoing final testing before delivery, I think it was serial #5. They were extremely proud of their cooling and invited us to put our hands on the panels in the opening of the C to see how cool it was. I still freak out over thinking about what would have happened if someone had tripped and fallen into all those wires on the inside.
I was a heavy user of the Cray-1, Cray-XMP and Cray-2 at the magnetic fusion computing center at Lawrence Livermore National Lab. One of the important things to remember was that the Crays of this generation were vector processors, that is they carried out operations across multiple elements of arrays at once. If you didn't structure your code properly, you didn't see these extraordinarily high instruction rates.
And, yes, quite some time ago I noticed that my cell phone had surpassed the capabilities of these early Crays :)
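The point above about structuring code for the vector units can be sketched abstractly: the first loop below has fully independent iterations (the shape a vectorizing compiler on a Cray wanted to see), while the second carries a dependency from one iteration to the next, forcing scalar execution. Plain Python standing in purely for illustration:

```python
# Vectorizable: each iteration is independent, so a vector unit can
# process many array elements per instruction.
def axpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

# Not straightforwardly vectorizable: each iteration needs the previous
# result (a loop-carried dependency), so it runs at scalar rates.
def prefix_sum(x):
    out, acc = [], 0.0
    for xi in x:
        acc += xi
        out.append(acc)
    return out

print(axpy(2.0, [1, 2, 3], [10, 20, 30]))  # [12.0, 24.0, 36.0]
print(prefix_sum([1.0, 2.0, 3.0]))         # [1.0, 3.0, 6.0]
```

Code dominated by loops of the first kind approached the Cray's peak rate; code of the second kind saw little of it.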
The CRAY-1 was so ridiculously ahead of its time that it took until the Pentium MMX (1997) for “ordinary” computers to catch up to its raw performance.
That’s 20 years or about 10,000X the available VLSI transistors via Moore’s Law.
> The CRAY-1 was so ridiculously ahead of its time that it took until the Pentium MMX ...
You'd need a different comparison to show how the Cray-1 was special. If the comparison is to single commodity CPUs, like the Pentium MMX, you could make much the same comparison for many mainframes and supercomputers. Several supercomputers in the 1980s exceeded 1 GFLOP, for example, and it wasn't until the 2000s that you could get commodity CPUs with that performance.
Inmos, not Imos, if my memory cells serve me correctly. I lived overseas at the time, so I did not hear about the Cray till like 1980/81. My friend (we were like 12) had an idea to write a simulator for digital circuits, and I was puzzled as to why you would want to simulate a circuit when you could build it and test it. He was way ahead of his time.
If these kinds of machines interest you I highly recommend the book "The Supermen" by Charles Murray. It has all the details you would ever want on Seymour Cray and others in the business.
I was working at a geophysical company in the '80s and we lusted after a Cray-1. The best we could afford were array processors (CSPI) connected to VAX systems.
> The aesthetics of the machine have not been neglected. The CPU is attractively housed in a cylindrical cabinet. The chassis are arranged two per each of the twelve wedge-shaped columns. At the base are the twelve power supplies. The power supply cabinets, which extend outward from the base are vinyl padded to provide seating for computer personnel.
According to this link https://www.datacenterdynamics.com/en/news/cray-1-supercompu... these things retailed for $8 million (in 1975 dollars!). That's almost $50,000,000 today - I can't even imagine how much work went into justifying a purchase like that.
Before IBM's Deep Blue, apparently there was a chess program called Cray Blitz. I'm sure the Cray had the raw power (80 MHz, with 64-element vector operations) needed to beat the top humans, but the software just wasn't there yet.
Blitz was ported to C and continued development as FOSS under the name Crafty, mainly by a University of Alabama professor, but to this day it can't beat top humans on a modern CPU (topping out at 2650 Elo instead of the 2850 of, say, Carlsen).
The latest version of Crafty has a significantly higher rating on CCRL than Fritz 10, the version that defeated Kramnik in 2006. He was the World Champion and was rated 2750 at the time. I do not know what source you used for Crafty’s rating but ratings from different lists are not comparable. It is highly probable that Crafty running on a Ryzen could defeat any human.
I am also of the opinion that with an optimised program the CRAY-1 would have been on par with Karpov and Fischer. I also think that Stockfish or some other strong program running on an original Pentium could be on par with Carlsen. I am not sure if Crafty’s licence would count as FOSS.
Compare this to a modern system: we could, I would guess, fully simulate this kind of slow computer in real time, with all the wheels, clockwork, and mechanics as well as the logic circuits with full electricity. Deciding the simulation level would be a bit challenging (is this an atomic, an electron, or a more common-sense simulation?); still, I think the simulation would work as well as this kind of document.
Absolutely not on the electrical level. ISA emulation, probably, maybe, though these Crays are not IEEE machines, so you have to actually fully emulate the floating-point operations instead of punting them to the FPU. That may make getting hundreds of MFLOPS out of it difficult.
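For the non-IEEE point above, here is a sketch of what decoding the Cray-1's float format in an emulator involves. The layout (1 sign bit, 15-bit exponent biased by 0o40000, 48-bit fraction, value = (-1)^s × fraction/2^48 × 2^(exp − 16384)) is as I recall it from the hardware reference manual; verify against real word dumps before relying on it:

```python
# Decode a Cray-1 64-bit floating-point word into a Python float.
# Assumed layout: [sign:1][exponent:15, bias 0o40000][fraction:48],
# with the fraction read as a value in [0, 1).
def cray1_to_float(word):
    sign = -1.0 if (word >> 63) & 1 else 1.0
    exponent = (word >> 48) & 0x7FFF
    fraction = word & ((1 << 48) - 1)
    if fraction == 0:
        return 0.0
    return sign * (fraction / float(1 << 48)) * 2.0 ** (exponent - 0o40000)

# 1.0 is fraction 0.5 (top fraction bit set) with exponent 0o40001:
one = (0o40001 << 48) | (1 << 47)
print(cray1_to_float(one))  # 1.0
```

Every arithmetic op has to be done in this representation (with the Cray's own rounding quirks), which is where the emulation cost piles up.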
I was able to pick up recycled Fluorinert, which I used for an immersed dual-Celeron setup. It was mind-boggling to see the submerged motherboard chugging away in silence beyond the soft whir/gurgle of the water pumps. My first CRAYon machine was so messy. I always hoped that it was coolant from our U of MN's Cray.
In college I had an account on our ACM chapter's DEC Alpha, which I used primarily for mudding. Its DNS name was cray-ymp.xxx.xxx.edu, which resulted in more than a few moments of shock/consternation from mods. "You're mudding from a CRAY Y-MP???"
"Seymour said he thought it was odd that Apple bought a Cray to design Macs because he was using Macs to design Crays. He sent me his designs for the Cray 3 in MacDraw on a floppy," reports KentK.
Cray himself. Here's a talk about designing it. My favorite part is his description of the aluminum machining they had to invent in order to move the freon through the frame to keep the machine from melting. It's a great talk.
In the past few years, whenever I re-watch 2001 and Dave is shutting down HAL, I see a spaceship-grade data center. And HAL finally sings "Daisy..." at the foundational, bare-metal layer.
I remember doing a report on this in high school in the late 80s. I'd love to do an order of magnitude comparison to a modern M4 Mac... Amazing how far we've come.
I just did a BOTE calculation for my iPhone (A17 Pro chip; GPU rated at 4 Tflops). According to the sales blurbage in TFA, the Cray 1 performed at 80 Mflops. (Yes, that is OBVIOUSLY not comparing apples to Apples -- pun intended). Unless I've dropped a decimal point, my iPhone is (capable of) 50,000 times the floating point speed of a Cray 1.
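Checking that arithmetic (using the 4 TFLOPS marketing figure for the A17 Pro GPU and the 80 MFLOPS peak from the Cray-1 brochure):

```python
# Ratio of the two headline peak-FLOPS figures quoted in the comment above.
a17_gpu_flops = 4e12   # ~4 TFLOPS, per Apple's marketing figure
cray1_flops = 80e6     # 80 MFLOPS peak, per the Cray-1 brochure
print(a17_gpu_flops / cray1_flops)  # 50000.0
```

So no dropped decimal point: 50,000× is right, with all the usual caveats about comparing peak GPU throughput to a 1976 vector unit.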
I remember computer magazines of the time talking about "a Cray on a chip" in their April 1st jokes.
Well... we're there. Far past, in fact. We live in the future that then was so far out of reach that people could only joke about it, not consider it a realistic possibility.
The "R exp" is subroutine call (which saves return address to register B00), and I believe "J Bjk" is the subroutine return.
The Cray-1 didn't have a hardware stack, so subroutine call is basically just jump there and back, using a register for the return address rather than pushing/popping it to/from the stack.
Another oddity of the instruction set that stands out (since I'm in the process of defining a VM ISA for a hobby project) is that the branch instructions test a register (A0 or S0) rather than looking at status flags. In a modern CPU a conditional branch, if (x < y), is implemented as compare-then-branch, where the compare instruction sets flags as if it had done a subtraction but doesn't actually modify the accumulator. In the Cray this is evidently done by doing an actual subtraction, leaving the result in A0, then branching based on the value of A0 (vs. looking at flags set by CMP).
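The difference between the two lowerings can be sketched with a toy register model (the model itself is invented for illustration; only the test-A0-instead-of-flags behaviour comes from the comment above):

```python
# Flags-style (x86-like): CMP computes x - y only to set flags,
# leaving every register untouched.
def flags_style(regs, x, y):
    regs = dict(regs)
    sign_flag = (regs[x] - regs[y]) < 0   # result is discarded
    return sign_flag, regs                # registers unchanged

# Cray-style: a real subtraction lands in A0, and the conditional
# branch tests A0's value directly -- so A0 is visibly clobbered.
def cray_style(regs, x, y):
    regs = dict(regs)
    regs["A0"] = regs[x] - regs[y]
    return regs["A0"] < 0, regs           # "branch if A0 negative"

taken, r = cray_style({"A0": 0, "A1": 3, "A2": 5}, "A1", "A2")
print(taken, r["A0"])  # True -2
```

The visible side effect (A0 holding the difference) is the trade-off: one less piece of hidden state for the pipeline to track, at the cost of burning a register on every conditional branch.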
The following quote gives some sense of how "manual" this was:
> "On execution of the return jump instruction (007), register B00 is set to the next instruction parcel address (P) and a branch to an address specified by ijkm occurs. Upon receiving control, the called routine will conventionally save (B00) so that the B00 register will be free for the called routine to initiate return jumps of its own. When a called routine wishes to return to its caller, it restores the saved address and executes a 005 instruction. This instruction, which is a branch to (Bjk), causes the address saved in Bjk to be entered into P as the address of the next instruction parcel to be executed."
Details were up to the compiler that produced the machine code.
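A sketch of that B-register convention as a toy model: the B00/Bjk roles and the 005/007 behaviour come from the quoted passage, while the Python scaffolding around them is invented for illustration.

```python
# Toy model of the Cray-1 return-jump convention. B00 receives the return
# address on call (007); a callee that makes its own calls must first stash
# (B00) in another B register; return (005) jumps through a B register.
B = [0] * 64                    # B00..B77 (octal numbering on the real machine)

def return_jump(next_parcel):   # 007: B00 <- P, then branch to the callee
    B[0] = next_parcel

def callee_prologue(save_reg):  # conventional prologue: free up B00
    B[save_reg] = B[0]

def return_via(reg):            # 005: P <- (Bjk)
    return B[reg]

return_jump(0o1234)             # caller invokes us; return address lands in B00
callee_prologue(1)              # we stash it in B01
return_jump(0o2000)             # our own nested call clobbers B00...
print(oct(return_via(1)))       # 0o1234  ...but we still return correctly
```

With no hardware stack, recursion (or any nesting deeper than the B registers allow) means the compiler has to spill those saved addresses to memory itself.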
chasil|1 month ago
https://en.wikipedia.org/wiki/Seymour_Cray
firecall|1 month ago
Sometimes I like to remind myself we are living in the future. A future that seemed like SciFi when I was a kid in the 70s!
Sadly I don’t think we will ever see Warp Drives, Time Travel or World Peace. But we might get Jet Packs!
HarHarVeryFunny|1 month ago
The Transputer's inter-chip channel connections remind me a bit of Nvidia's NVLink or AMD's Infinity Fabric.
5-|1 month ago
I don't think there is a comparable book about the Cray-1, is there?
tvguide61|1 month ago
Thoughts:
1. To block some sunlight from getting in.
2. It’s a secure facility and wanted to prevent people from looking in.
3. To not have to look at something outside.
4. It’s a secure facility and wanted to prevent the chance of taking a picture of someone or something outside that could compromise the location of the computer or someone’s identity; sometimes the first place a photogenic computer was built was at a customer site.
fredoralive|1 month ago
As for windows in a computer room, seems a bit unusual, but a nicer working environment than the usual windowless box I'd guess...
ckastner|1 month ago
https://cray-history.net/2021/07/16/apple-computer-and-cray-...
themafia|1 month ago
https://www.youtube.com/watch?v=vtOA1vuoDgQ
fghorow|1 month ago
In my back pocket. To watch cat videos.
AnimalMuppet|1 month ago
In one lifetime.
hulitu|1 month ago
Yes, it would be nice to compare the capabilities (multiuser, multitasking, security, RCE). Did we get _so_ far? How many users can a Mac sustain?
TheOtherHobbes|1 month ago
M4 CPU - 280 GFLOPS
M4 GPU - 2900 GFLOPS
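Against the brochure's 80 MFLOPS peak for the Cray-1, those figures work out to:

```python
# Rough speedups of the quoted M4 figures over the Cray-1's peak rate.
cray1 = 80e6           # 80 MFLOPS
print(280e9 / cray1)   # 3500.0   (M4 CPU)
print(2900e9 / cray1)  # 36250.0  (M4 GPU)
```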
qingcharles|1 month ago
I know a couple of museums have them, but I don't think any software has ever surfaced, am I right?
HarHarVeryFunny|1 month ago
Gemini explains this as being to help pipelining.
antonvs|1 month ago
There's some more detail here: https://ed-thelen.org/comp-hist/CRAY-1-HardRefMan/CRAY-1-HRM...