Using floppy disks and old hardware and software doesn't sound like a problem if it still runs and does what it's supposed to do. I'm skeptical that building a modern system would really save money since the temptation for feature creep is too great.
> Using floppy disks and old hardware and software doesn't sound like a problem if it still runs and does what it's supposed to do.
It sounds like a problem to me for the simple fact that replacement parts are nearly impossible to find, and it's clearly costing the taxpayer a lot of money. That alone should be reason enough to upgrade.
It's also a huge security problem because many of these machines were designed before modern security procedures were invented. How am I supposed to maintain a cryptographically secure password system on a machine with a processor too slow to run a hashing algorithm?
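To put a number on that: modern password storage is deliberately CPU-expensive, which is exactly what decades-old processors can't afford. A minimal sketch using Python's standard library (the iteration count is illustrative, not a recommendation):

    import hashlib, os, time

    # Schemes like PBKDF2 iterate the hash hundreds of thousands of times
    # on purpose, so that brute-force guessing is expensive for attackers.
    password = b"correct horse battery staple"
    salt = os.urandom(16)

    start = time.perf_counter()
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
    print(f"one password check: {time.perf_counter() - start:.2f}s")

A fraction of a second per check on a modern CPU; on the hardware these agencies are running, the same work would be slow enough to make ordinary logins unusable.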
> I'm skeptical that building a modern system would really save money since the temptation for feature creep is too great.
So, because we might be tempted to add a few features, we shouldn't upgrade technology? I don't understand where this weird pseudo-luddite mentality in the tech world comes from. I see it all the time in tech forums, and I just don't get it.
"If it's not broke, don't fix it" is a fun, catchy phrase, but it really breaks down as soon as you try to apply it anywhere. "Hey, you should really change the oil in your car." "If it's not broke, don't fix it."
Also, mentally replace every instance of "museum ready" in this article with "mission critical" (which most of them probably are), and it'll seem a lot less ridiculous that they maintain them.
I think the article is pretty light on the specifics.
There are parts of technology where you're absolutely right - if some mission-critical system is running on some old circuit, but it works reliably and maintenance is feasible, don't change it!
But other pieces of tech include the Windows 95 computer that is the only one anyone can do _____ with, because of some complex system that only runs on it. And anytime you do _____, you need to copy your data into a file, and exclude some specifically formatted config file that you write in Notepad, in order to get it done. And the whole system is a small project that could feasibly be reimplemented quickly and cheaply on modern computers.
So there are two sides to this, and arguing absolutes doesn't get us much closer to the truth.
The article mentions that Social Security has a variety of legacy systems and updates the ones it thinks are the slowest and costliest. That's the right way to think about it, so long as they have the technical expertise to make those judgments correctly.
Agreed. Plus, even if it is decided to modernize any of these systems, we would likely have to run the old system and the new one in parallel for quite some time to ensure a smooth transition. Air traffic control is an example of where such practices occur.
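A minimal sketch of what that parallel run often looks like in software: every input goes to both systems, the old one stays authoritative, and divergences get logged for review (the function names here are hypothetical):

    import logging

    # Shadow-run harness: the legacy system remains the system of record;
    # the replacement sees the same inputs, and any mismatch is logged.
    def process(request, legacy_system, new_system):
        expected = legacy_system(request)
        try:
            candidate = new_system(request)
            if candidate != expected:
                logging.warning("divergence on %r: %r != %r",
                                request, candidate, expected)
        except Exception:
            logging.exception("new system failed on %r", request)
        return expected  # callers only ever see the legacy answer

Only once the divergence log has stayed quiet for long enough do you dare make the new system authoritative.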
Even if the software never needs an update (possible for isolated process-control systems, less so for central record-keeping at the IRS), there's hardware to consider -- where, nowadays, do you get a replacement 8-inch floppy drive? Or, say, new read heads for one?
That's great reasoning, until the day it stops working.
Yeah, but what happens further down the road when it doesn't run well anymore? My old company was struggling to find SMEs for its legacy systems.
The real problem is that there are huge legacy systems tied to these platforms that nobody fully understands and that are too risky to port or re-engineer. Think of our military systems or payroll going down because software was ported wrong, or because it relied on an underdocumented assembler or compiler feature.
There's some hope on reducing costs at least. Look up NuVAX for an example of emulators designed to work exactly like old hardware for a fraction of the price, space, energy, and so on. I haven't heard of attempts yet, but the next step might be instrumenting them to trace program code/data for porting. Or binary translation to modern architectures. I know DEC did the latter for the VAX-to-Alpha port.
NuVAX or equivalent if you need real hardware compatibility, and physical-to-virtual migration for everything else. Seems like a golden consulting opportunity to leverage SIMH.
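For anyone who hasn't looked inside one: the core of a SIMH-style emulator is just a fetch-decode-execute loop over the old machine's instruction set. A toy sketch (the three-instruction ISA here is invented for the example):

    # Fetch-decode-execute: each opcode of the original machine is
    # reproduced in software, so old binaries run unmodified.
    def run(program):
        acc, pc = 0, 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "PRINT":
                print(acc)
            pc += 1

    run([("LOAD", 40), ("ADD", 2), ("PRINT", None)])  # prints 42

The hard part in a real emulator isn't this loop; it's reproducing the full instruction set, device timing, and I/O quirks faithfully enough that 40-year-old software can't tell the difference.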
The private sector usually manages to keep its systems updated. This seems like the typical incentive and accountability problems of government bureaucracy.
Most federal systems that are anywhere over ten years old (and that's most of them) are complete mysteries to the people who both use and maintain them.
A long time ago, I was responsible for such a system. I didn't ask for the job; I simply was the smartest person in the room for too long.
I vividly remember one day we had a problem with folks in a remote location entering things and those things getting mangled and/or lost on the way to the system-of-record.
For one system, with maybe five thousand users and perhaps a few gigabytes of traffic a month, I was on a call with 30 people spanning most of the Earth. I learned that there were at least a dozen separate systems at that location between the person entering the data and the data being sent to HQ. Each system was old. Each system had a separate vendor, each claiming to be the only vendor that understood that system. (Sometimes this was true. Many times they were just bluffing.)
And -- and this was the kicker -- for each of our dozens of locations, each location manager, because of their friendship with politicians, made their own decisions about how machines were configured and which programs were installed. They were complaining to us because things were bad, but they did not feel like they answered to us.
I was responsible for fixing it.
At the end of that call, I was reminded of Arthur C. Clarke's quip: Any sufficiently advanced technology is indistinguishable from magic.
But I doubt I thought of it in the way he meant it.
To be fair, my last employer (an aerospace manufacturer) ran an incredibly dated OS with pretty decent results. It was simple, to the point, ugly as hell, but it got the job done without needing constant updates. Also, we never had a problem with malware (because who writes malware for a 30-year-old OS?).
I understand this article mentions many different sectors and functions for antiquated systems, but sometimes an update simply isn't needed.
I once worked at a large national (US) company that will not be named.
They had an old database from the early '70s that stored _all_ of their data: everything, contacts, billing, etc.
That was accessible through a special proprietary program, overseen by one college kid after the rest of his team was let go.
That proprietary program was essentially an old DOS prompt, reached via a Java applet for an early version of Internet Explorer (< 7) that emulated the old DOS-like user interface.
That Java applet was in turn fronted by a Java web service, running on machines that had the applet and IE installed.
The Java web service was accessed by tens of thousands, possibly a hundred thousand, people nationally, through a few different interfaces.
The one I was aware of (probably the largest) connected to the Java web service through a C# WCF service.
The C# WCF service was built as a backend for a new JavaScript/HTML4 front end for the company.
That new UI was intended to partially (BUT ONLY PARTIALLY) replace a strange, 100% ActionScript web UI made previously.
Learning about their system architecture was like stepping back through time. I felt like an archaeologist uncovering layers of an ancient city.
It was also amazing how many people were employed supporting each layer, any of which could have replaced the lower tiers if the company had just upgraded at the time.
It was similarly amazing how the company had laid off everyone at entire layers when management arbitrarily decided it needed to make cuts, while other layers doing the _exact_same_thing_ were staffed with huge numbers of people, completely unaware that a single bug in the layer beneath them was no longer being overseen or supported and could bring the whole house down at any time.
Careful. I believe I just read an article (Wired?) about some hackers doing exactly this.
Security through Seniority might be a good name for the mindset. :-)
I remember reading a counterpoint about this a while back -- sometimes for critical systems, the risk of updates is really high.
For example, NASA still uses hardened 808x systems. On top of that, for space-based systems in an ionizing environment, the risk of hardware developing faults is non-zero, and the kinds of things people do for error correction in that environment are insane.
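For flavor, the simplest of those techniques is triple modular redundancy: keep three copies and read back by majority vote, so a single flipped bit in one copy is masked. A toy sketch (not any flight system's actual design):

    # Bitwise majority vote across three copies: a radiation-induced
    # bit flip in any single copy cannot corrupt the value read back.
    def tmr_write(value):
        return [value, value, value]

    def tmr_read(copies):
        a, b, c = copies
        return (a & b) | (a & c) | (b & c)  # each bit: 2-of-3 majority

    copies = tmr_write(0b10101100)
    copies[1] ^= 0b01000000  # simulate a single-event upset in copy 1
    assert tmr_read(copies) == 0b10101100

Real flight hardware does this in silicon, with memory scrubbing and ECC layered on top, but the voting idea is the same.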
The flip side of this is that, when a technology is widely used, there are economies of scale. If you are the sole remaining user of a technology, the cost of maintaining it, which used to be shared by all the other users, falls entirely on you.
And then there's the bigger question: how effective is nuclear deterrence? And I don't mean for the United States of America vs. the rest of the world. I mean for the global, human civilization, and homo sapiens as a species.
An old joke about the Pentagon goes along the lines of "we don't need to worry about security, because our systems are too old, rare, and proprietary to find."
Sounds like a taunt. Almost makes me want to write some code.
As someone else said, security via age and rarity.
On the other hand, they're getting decades out of the software. Use the bleeding-edge stuff, and it's obsolete in two or three years now. Use a "cloud" service, and the service probably goes away within five years. The new stuff has too much churn. Where will Rails be in ten years? Python? Java will still be around; it's the successor to COBOL.
Indeed. Basically anything in the JS world, except maybe Node.js, will vanish in 5 years. (I'm looking at you, Bower, Gulp, EmberJS; even the giant dinosaur jQuery is on the decline, given that everyone and their dog is shifting to Angular.)
Seriously, if one is after long-lived software, choose a solid PHP framework (Drupal 7, for example, has been around for 5 years now and will probably be supported for another two or three) or a Java framework, together with a rock-solid database (i.e. no NoSQL crap)...
Hate PHP and Java all you want, but these two languages treat backwards compatibility as a priority.
Nassim Taleb talks about a heuristic reflective of this concept in Antifragile (the Lindy effect) -- basically, technologies that have been around for "x" years are likely to remain in use for another "x" years. Think about the wheel. Or paper. Or (as cited above) COBOL. Interesting food for thought. I use this when talking to folks who insist that client-side file storage is going away (e.g. "but everything is going to move to the cloud!"). Explicit filenames on the desktop have been around for decades -- and are likely to remain a fundamental part of our system structures for a long time to come (though they are likely to be joined by documents stored in the cloud).
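The heuristic actually falls out of heavy-tailed lifetime distributions. A quick Monte Carlo sketch, assuming Pareto-distributed lifetimes with shape 2 (chosen purely for illustration): expected remaining life grows in proportion to age.

    import random

    # With Pareto(alpha=2) lifetimes, E[remaining life | survived to age t]
    # works out to t itself: having lasted longer predicts lasting longer.
    def pareto_sample(alpha=2.0, xm=1.0):
        u = 1.0 - random.random()  # uniform in (0, 1]
        return xm / (u ** (1.0 / alpha))

    lifetimes = [pareto_sample() for _ in range(1_000_000)]
    for age in (2, 4, 8, 16):
        remaining = [t - age for t in lifetimes if t > age]
        print(age, round(sum(remaining) / len(remaining), 1))  # roughly age

Contrast that with anything whose lifetime is more bell-shaped (people, hardware components), where having survived longer means less time left, not more.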
They're getting decades out of the software, but "about three-fourths of the $80 billion budget goes to keep aging technology running". Is that worth it to you?
Yeah, like all the Python code I've written is just going to expire in 10 years. Popular languages rarely ever die; there is clearly an extraordinarily long tail.
I wonder, what would your answer be if someone came up to you and said, "We need a computer that we can maintain and keep in service for 50 years or more. What should we do?"
Take the IBM route. Write to a VM spec. Hardware comes and goes, but VMs last forever. The System/38's virtual architecture is still used on modern AS/400 machines, so programs written in the late '70s still work today without anyone needing to lift a finger.
If you know up front that it needs to last for 50 years, and the requirements are unlikely to change, it's easy to justify the expense of getting lots of spare hardware to be able to handle component failure. That's almost certainly going to be cheaper than trying to port the software forward every decade as technology changes.
Besides that, use open protocols and open source for the whole stack. Use open source hardware too, but don't depend on being able to order new hardware. Processes could change (we could move away from silicon to something else), or file formats for schematics can change so that you can't find anyone to produce the product.
Also helpful would be to target a common VM like the JVM instead of relying on hardware-specific abilities.
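On the open-protocols point, here's a minimal sketch of the kind of deliberately boring, self-describing record format that tends to outlive its tooling (the format here is invented for the example):

    # "Boring by design": plain UTF-8 text with named fields, so there is
    # no binary layout for someone to reverse-engineer fifty years on.
    def encode(record):
        return "".join(f"{k}: {v}\n" for k, v in sorted(record.items())) + "\n"

    def decode(text):
        pairs = (line.split(": ", 1) for line in text.strip().splitlines())
        return {k: v for k, v in pairs}

    record = {"id": "1042", "name": "J. Smith", "balance_cents": "125000"}
    assert decode(encode(record)) == record

The same logic applies one level up: prefer protocols and formats with several independent implementations, so no single vendor's death strands your data.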
I've read that the answer to this question is why a large part of our military-industrial complex exists: the US government gave Lockheed and others 50+ year non-cancellable contracts to build equipment back in WW2, because the companies couldn't otherwise justify the business risk (similar to why Fannie Mae and Freddie Mac were created), and it was deemed in the government's "best interest" to keep these companies afloat in some capacity. Aircraft carriers we don't need, guns that are long obsolete, tanks the Pentagon doesn't want anymore -- the list goes on with the excess manufacturing the government seems to be roped into buying because of... "reasons."
I've never read through those contracts myself but I would hope that they're public record if they're so old and important.
That's really the best way to do it; you have to own (or control) as much of the supply chain as you can. Not really practical if you're a private company who has to compete on thin margins, but not really that hard to imagine for a government.
As far as impracticality arguments: keep in mind that each government that maintains a nuclear arsenal (rogue/client states excepted) already has a completely captive technological supply chain, generally at least at a 1950s level, in order to manufacture the weapons in the first place. So if you wanted to build secure nuclear C2 systems, it would make sense to basically build them with the same degree of security, and using the same presumptively-secure supply chains, that are used to create the weapons themselves.
Honestly, I don't see a good solution to this until the rate of technological change really slows down. It seems like your options are either to pay to periodically re-engineer every system or to pay to maintain obsolete hardware.
"Feds spend billions to enforce museum-ready laws"
These computer systems aren't even that old compared to many things the government spends money on.
Anecdotal: in the early-to-mid eighties I was in the US Air Force. The machine I was first assigned to watch over was in the secure comm center. This Burroughs machine was the first non-tube computer made by Burroughs. It could boot from paper tape or punched cards and was replete with blinking lights.
Later I moved up in tech to a Sperry/Unisys system. All our personnel data and such was loaded via cards, physical cards in multiple boxes, until near '88.
So honestly I don't doubt they still do similar. I was just so glad we got out of boxes of cards, because having to fix runs each night got old, and all for a bent card.
It got me into programming, Turbo Pascal at the time. When we moved off physical cards, it was onto 360K floppies. The problem was, the provided upload/download programs could take half an hour or more to transfer to the 1100/70. The Turbo Pascal program did it in five minutes or less per disk without issue.
On one hand, it seems inefficient and perhaps dangerous to be reliant on such old systems. On the other hand, a new software project to replace them sounds at risk of becoming extremely expensive and overly complicated, because of all the government contracting anti-patterns.
In theory there's a middle ground that avoids both these extremes. In reality, with government software... I'm skeptical it will happen.
I want to know only one thing: COBOL is named under Social Security, so I suspect the "outdated computer language that is difficult to write and maintain" that Treasury uses is not COBOL -- but oh god, then what is it??
Dating from 1960 ("the systems are about 56 years old"), and especially given that the system in question is likely an IBM mainframe, Fortran would be my guess.
Fortran can actually be surprisingly pleasant, at least as pleasant as C, but I'm guessing their particular code is not.
One of the many other articles that have been floating around the past few days mentioned that Treasury has a bunch of programs written in an old IBM architecture's version of assembler.
"Feds spend billions to enforce museum-ready laws"
These computer systems aren't even that old compared to many things the government spends money on.
If it's getting the job done cheaply and efficiently, as required, and better than the alternatives, then it's the best technology to use.