The thing that people might find hard to understand these days is that computers - particularly mainframes and minicomputers - were really unreliable. The hardware was often custom built, and it would often just stop; the manufacturer had to come in to work out what was wrong and fix it, which typically involved replacing boards until the problem went away.
I should note that I'm basing this on my experience of being involved with a company in the late 1980s - not the 1940s, which is the topic of this post. Although I can imagine that back then the machines would only run for a matter of hours at best before breaking down and needing debugging.
It was the same with networking, which was particularly unreliable when it ran over coaxial cable like RG58; that cabling had an array of problems that could take the whole company network down, and all the connected PCs with it.
My mother was a programmer in the 1950s. Someone came in early every morning and ran memory tests. If a tray of memory failed, they replaced it so that the computer could run while a tech debugged the failing tray. (A tray was 100 words by 36 bits of tube memory.)
We had our monthly "Technology Exchange" meeting at work today (internal lightning talks and short tech talks), and one of the presentations was an interview with a guy who is about to retire after working for our little unit for the last 40 years. When he started, the 6 or 7 programmers had to share 3 terminals (so you wrote your programs on paper before you typed them in). They thought it was great because they didn't have to punch cards anymore. They had a little minicomputer in University Hall (across the street from the Berkeley campus), about the size of a hardware rack, where they could test some of their PL/I code -- but they had to dial up on a modem to the mainframe down in Westwood (UCLA) to actually run the jobs (he said that computer took up a whole room). The original project for our group was to produce an annual union catalog of the UC libraries' holdings on microfiche.
We also had the first network that connected all the campuses, with microwave line of sight to Mount Zion (UCSF) and satellite dishes. Someone had to write 75k lines of assembly to implement the TCP/IP stack for the mainframe.
Was RG58 the stuff that had the "vampire taps"? We still had some of that when I started at the libraries in 1996.
The early, first-generation tube-based computers had tube failures more or less daily. Higher-reliability tubes specifically intended for computers were eventually introduced. One trick was to run the tube filaments "derated", i.e. under the rated voltage. The tubes worked fine as logic elements at the lower voltage, although that probably messed up their normal response curves. When I saw the Colossus rebuild at Bletchley Park, I asked about tube failure, and they confirmed that they run the tubes (or valves, as they call them) on the Colossus at derated voltage, greatly reducing the failure rate.
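The arithmetic behind that trick is worth spelling out: with thousands of tubes failing independently, the machine's time between failures is roughly the per-tube figure divided by the tube count. A back-of-the-envelope sketch (the tube count is ENIAC's often-quoted figure; the per-tube MTBF values and the 10x benefit from derating are illustrative assumptions, not measured data):

```python
def machine_mtbf_hours(n_tubes, tube_mtbf_hours):
    # With independent, exponentially distributed tube failures,
    # failure rates add, so the machine's mean time between
    # failures is the per-tube MTBF divided by the tube count.
    return tube_mtbf_hours / n_tubes

n = 17_468  # ENIAC's approximate tube count

# Assumed per-tube MTBF at rated filament voltage, and an
# assumed ~10x improvement from running the filaments derated.
rated = machine_mtbf_hours(n, 500_000)
derated = machine_mtbf_hours(n, 5_000_000)

print(f"rated filaments:   one tube failure every {rated:.0f} hours")
print(f"derated filaments: one tube failure every {derated:.0f} hours")
```

Whatever the exact per-tube numbers were, dividing by a tube count in the thousands is why tube life dominated early machine reliability.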
I sort of agree with that, though I'd say there was a peak that passed somewhere in the '90s. There was a time when the most important compute ran on high-end mainframes. They were more reliable than what we have now.
An interesting fact about ENIAC: although it was originally programmed via plugboards in 1946, it was soon retrofitted into a stored-program computer in 1948 to simplify programming, using its spare function table units as ROM and its extra accumulators as a program counter and a pointer. I wonder if there's any project to recreate an ENIAC simulation in stored-program mode.
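The retrofit described above can be sketched in miniature: treat a function table as a read-only instruction store, and an accumulator repurposed as a program counter, stepping a fetch-decode-execute loop. The opcodes and the three-instruction program below are invented for illustration and are not ENIAC's actual 1948 order code:

```python
# A ROM of (opcode, operand) pairs stands in for ENIAC's spare
# function tables; `pc` stands in for the accumulator that was
# repurposed as a program counter.
ROM = [
    ("LOAD", 7),   # acc <- 7
    ("ADD",  5),   # acc <- acc + 5
    ("HALT", 0),
]

def run(rom):
    pc, acc = 0, 0
    while True:
        op, arg = rom[pc]   # fetch from the "function table"
        pc += 1             # advance the program counter
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

print(run(ROM))  # prints 12
```

The point of the 1948 change is visible even at this scale: the program lives in addressable storage and is walked by a counter, instead of being wired into a plugboard.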
The name "MANIAC" (Mathematical Analyzer, Numerical Integrator, and Computer, or Mathematical Analyzer, Numerator, Integrator, and Computer, https://en.wikipedia.org/wiki/MANIAC_I) was eventually chosen by physicist Nicholas Metropolis for a computer designed under his leadership, as an attempt to ridicule and stop the rash of silly acronyms for machine names, such as ENIAC, EDVAC, UNIVAC, etc.
A programmer from Huntsville, Alabama started in 1963, later worked on Burroughs equipment, and much later on MP/M, then Apple. Quite a clever fellow, with many great stories. Hardware would certainly be down from time to time.
andrewstuart | 6 years ago
Modern computing is incredibly reliable.
bitwize | 6 years ago
Indeed, and that sometimes entailed the removal of actual bugs: https://upload.wikimedia.org/wikipedia/commons/8/8a/H96566k....