aq9 | 3 years ago
For the type of organizations that run workloads on IBM mainframes, there are three drivers:

* Legacy: The application was written for the mainframe, cannot run on anything else, and is too expensive in terms of dev and test time to re-platform.

* Business value: This is the big one; these workloads make their companies hundreds of millions to billions of dollars per year. The price premium for running them on a mainframe is a rounding error.

* Reliability: With the cloud, I hold the opinion that the average x86 application is less available/reliable than a well-run pre-cloud application (which already included HA, etc.). Mainframe apps and hardware blow all of this out of the water.
FWIW: I programmed mainframes briefly early in my career, I am quite familiar with the ecosystem.
PaulHoule | 3 years ago
Every calculation in the CPU is replicated. If a CPU shows any sign of failure, the system will try to migrate threads off the failing CPU to other CPUs.
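The replicate-compare-migrate idea above can be sketched in a few lines. This is a hedged illustration of dual modular redundancy in general, not IBM's actual hardware design; `run_on_core`, `lockstep_execute`, and the core IDs are all made-up names for the sketch.

```python
# Illustrative sketch of dual modular redundancy: run the same
# computation on two "cores", compare the results, and rerun on a
# spare core if they disagree. Real mainframes do this in hardware,
# per instruction, with transparent thread migration.

def run_on_core(core_id: int, fn, *args):
    """Stand-in for dispatching work to a physical core."""
    return fn(*args)

def lockstep_execute(fn, *args, spare_core: int = 2):
    """Execute fn on two cores; on a mismatch, treat the pair as
    suspect and migrate the work to a spare core."""
    a = run_on_core(0, fn, *args)
    b = run_on_core(1, fn, *args)
    if a == b:
        return a
    # Results disagree: one core miscomputed; retry on the spare.
    return run_on_core(spare_core, fn, *args)

print(lockstep_execute(lambda x, y: x + y, 2, 3))  # prints 5
```

In hardware this comparison happens per calculation with no software involvement, which is what makes the migration transparent to running applications.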
DRAM is RAIDed.
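IBM's memory protection (marketed as RAIM) is more elaborate than this, but the RAID-like core idea is XOR parity across memory channels: lose any one channel and its contents can be rebuilt from the survivors. A minimal sketch, with hypothetical `parity`/`rebuild` helpers:

```python
# XOR parity across memory channels, RAID-style: the parity channel
# is the XOR of all data channels, so any single lost channel equals
# the XOR of the remaining channels plus parity.
from functools import reduce

def parity(channels: list[bytes]) -> bytes:
    """XOR all channels together to form (or consume) a parity block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), channels)

def rebuild(surviving: list[bytes], parity_block: bytes) -> bytes:
    """Reconstruct one lost channel from the survivors plus parity."""
    return parity([*surviving, parity_block])

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = parity(data)
# Channel 1 fails; rebuild it from the other channels and parity.
recovered = rebuild([data[0], data[2]], p)
assert recovered == data[1]
```

The same XOR identity underlies RAID 5 on disk; applying it to memory channels lets a whole DRAM chip or channel fail without data loss.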
There is a disaster recovery capability that can replicate data across several data centers within a 70 km range via optical fiber. If one of them burns, gets flooded, or is hit with a nuke, the others will pick up the slack automatically.
riskable | 3 years ago
Not at the OS level. Back when I was doing penetration testing nearly every organization that had IBM mainframes would suffer pretty severe outages just from our basic scans and doing things like checking open ports. They were also super duper easy to break into 90% of the time.
Also, most of the software running on mainframes has been running for decades, which means they've had 40+ years to work out the bugs. I'm 100% certain that if you took any given "modern" software stack (take your pick!) and very carefully applied patches to it for 40 years without ever adding any major new features, it would be equally reliable.
ninefathom | 3 years ago