top | item 6622035

“The real problems are with the back end of the software”

125 points| wwilson | 12 years ago |marginalrevolution.com | reply

148 comments

[+] jroseattle|12 years ago|reply
"The front end technology is not the problem here."

Let me fix that statement: "The front end technology is not the worst problem here."

Looking at the resources loaded for the sign-in page, I counted 58 separate Javascript files, including one whose name implied it was minified but which, on inspection, clearly was not. I didn't bother counting CSS or image resources. I returned to the page two days ago and found it down for scheduled maintenance. It remains in that state.

CGI obviously borked this project. The government deserves its own special classification of criticism, but poor planning, change management, etc. from the government is no excuse for CGI not building an architecturally sound web site.

The contract was $350 million? Good grief, they overpaid. Nonetheless, if we could go back in time AND assuming we needed to spend this budget, here's what I would have done:

1. We make investments of $15 million in 20 different startups, and tell them to implement the initial phase -- let's say we call it the "minimum viable product" or MVP. Each startup has the same deadline for delivery.

2. On the delivery date, all companies meet with us to review their MVP. We call it a "demo day" and view all 20 demos.

3. Through some set of criteria, we create a short list of five companies from the 20 demos. Those five companies receive an additional $5 million investment, and another delivery deadline.

4. The companies iterate on their MVP and come back for another demo, this time with a deep dive.

5. We pick a winner from those five. The winner gets another $25 million investment and is responsible for any additional work to be completed.

TechStars for government, essentially.

[+] GVIrish|12 years ago|reply
I don't see how giving 20 startups $15 million each would have led to success instead of failure. Even if you found a better company to do the implementation than CGI Federal, there were several enormous problems that no company would have had any control over.

1. Requirements were delayed so much that development didn't start until March of this year. That spells doom for a system of this complexity, regardless of who the implementer is. You need several months of functional, load, and integration testing, so effectively you would only have had 4 to 5 months to code healthcare.gov. And that's assuming there weren't any big requirements changes.

2. The people responsible for integration (Center for Medicare/Medicaid Services) had no large IT project integration experience. These are people that thought 1 week of full integration testing would be enough.

3. Healthcare.gov had to integrate with legacy systems from the IRS, Medicare/Medicaid, Social Security, in addition to the various state exchanges. Any number of those systems could have serious flaws that would make it extremely difficult to interface with. On any project a poorly implemented legacy system can dramatically affect the effort needed to be successful. Again, even the best companies would've had a sizable challenge dealing with that.

4. One of the biggest challenges in government IT is the customer. The government decision makers often don't know enough about software engineering to make sensible decisions on requirements, timelines, testing, you name it. In this case there was the added political pressure of, "This cannot fail," even though it should've been clear at least a year ago that there's no way they were going to make the deadline. But you get people who think that you can just deploy and fix it as you go along. Or you get people that think you can just add developers to make up for lost time.

5. And what does being a "startup" have to do with it anyway? Either a software company can do the work, or they can't. Whether they are a startup or an established entity really has nothing to do with it.

[+] hga|12 years ago|reply
While I'm sure you're partly correct, until we know how bad the CMS on up management was for this specifically, can we really say that for sure?

What we know is:

The project didn't get seriously started until February/March (e.g. the election created a 3+ month freeze on HHS publicly visible work).

The NYT reported that in the last 10 months, 7 major requirements changes were made.

We've been told the "no window shopping" one was made in August or September.

We've heard from multiple sources that changes were ordered through the week before launch.

Given all the above, how much do you see as incompetence, and how much as "just not done yet" pre-alpha stuff? I'm mostly a back end developer and am not up to date enough to judge this; I'm really interested in whether the above refines your judgment.

[+] malandrew|12 years ago|reply
I'd add that all participating startups would also need to develop out in the open on something like github or bitbucket. This way we can publicly observe the progress and quality of the work from the 20 different startups and claw back any unused money from startups that are clearly going to fail on the way. Furthermore, we could mandate a requirement that all the code produced is effectively free and open source and that any of the 20 startups can appropriate particularly well built parts from each other.
[+] ams6110|12 years ago|reply
poor planning, change management, etc. from the government is no excuse for CGI not building an architecturally sound web site

In the world that most of us live in, this is true. But for companies like CGI their business is not really building architecturally sound systems, it is keeping the doors open to an endless stream of government contract dollars. When the government rewards failure by granting the same vendor another contract to fix the problems, the predictable happens.

[+] tunesmith|12 years ago|reply
Until you've got that legacy system that the project has to integrate with. At that point, the legacy system is dealing with twenty different new large projects trying to integrate with it, instead of one.

MVP doesn't work well with deep integration. You can break this down to a form that takes one input, and returns one result on a following page. From a UI perspective, this seems like one Agile story. But that one round-trip can spawn so many integration steps. I just got finished with a health care IT project like that. One round-trip step involved integration with a single-sign-on service (which needed to be reconfigured), a rickety SOAP service provider (which had limits in how many test boxes they could set up and was controlled by a different bureaucracy and needed approval processes to turn on each required API method), a separate box returning chunks of patient data wrapped in html (don't ask, this was again out of our division's control), and our own backend system through REST so our resultant data would not be stored on the same server as our webserver (cluster). If some of these backend servers were told "okay, you now have twenty implementations to deal with instead of one", it would have drastically reduced the probabilities of completion.

[+] lgieron|12 years ago|reply
Unfortunately, it can be gamed by taking the initial $15m, making a totally half-assed effort (worth, say, $1m), and raking in a handsome profit of $14m.
[+] dreamfactory|12 years ago|reply
I don't know about US gov procurement but I do know a bit about European, which I don't think has a particularly better track record.

On these big gov projects you would not believe how terrifyingly thin the margins are for integrators due to politicians being very sensitive about being seen to be responsible with the public purse strings, particularly given that they don't understand the technology. In addition to the thin margins, you therefore also see extensive offshoring and very low blended rates.

The projects are still incredibly expensive in the end and barely perform, because the cheap labour incurs massive technical debt, and the thin margins mean that the integrators try to insulate their risk with many layers of project management and huge amounts of rigid enterprise architecture and up-front planning.

Given this mentality of cutting cost to the bone via forward planning, even proofs of concept are hard to get through, let alone investing in 15 prototypes. I think the way to do this would be as part of a gov tech investment scheme rather than attached to a specific project or program.

[+] chanux|12 years ago|reply
One of the questions I have is why/how no one came to think of the back end issues at an early stage of development. (I assume a $350 mil project involves a lot of experienced people.)

I have seen an infographic on how large the code base is, and at the beginning of the parent article I thought the guy was going to argue that the code base is huge because they had to circumvent/work around the back end problems.

[+] waterside81|12 years ago|reply
Hyperbole aside ("... an act which would border on criminal negligence if it was done in the private sector and someone was harmed ..." - what does that even mean? So all of us who have shipped buggy software for our customers are borderline criminals?), this doesn't surprise me, having dealt with the VA. They have legacy upon legacy upon legacy, with all sorts of fun limitations, like not being able to have a "\t" in your content because that'll screw up their backend, which relies on tab-delimited data. Health care in the US is playing catch-up, technology-wise, with almost every other industry. And not for lack of technology, but for lack of political willpower.
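A trivial defensive layer for that kind of constraint is to sanitize fields before they ever reach the tab-delimited backend. A hypothetical Python sketch, purely for illustration (nothing here reflects the VA's actual code):

```python
# Hypothetical sanitizer for a backend that stores records as
# tab-delimited lines: strip the characters that would corrupt the format.
def sanitize_field(text: str) -> str:
    """Replace tabs and newlines, which would shift or split columns."""
    return text.replace("\t", " ").replace("\r", " ").replace("\n", " ")

def to_record(fields):
    """Join sanitized fields into one tab-delimited line."""
    return "\t".join(sanitize_field(f) for f in fields)
```

The point being that a one-line guard at the boundary is far cheaper than rejecting whole classes of user input.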

My favourite example of this was trying to deploy an app within the VA that was written in Django. I was told "Python is not on the list of acceptable languages." So we came back to them and said, "Good news everyone, we ported it to Java." Of course, it was just Jython, but that's the sort of stuff you encounter.

Multiply this by the complexity involved in trying to herd all these cats into one backend like healthcare.gov and it was doomed to fail.

[+] Spearchucker|12 years ago|reply
Such projects are doomed to fail because Big.corp, as well as USA.gov, doesn't get that they're not using the right people.

High profile project? Want it to work? Hire the right people. Want the right people? Pay whatever it takes.

I've been involved in more such projects than I care to remember, and the problem is always the same. A project manager with rudimentary delivery process knowledge owns a large technology project. What's needed is a technically astute lead that knows how to abstract away from delicate backend dependencies, knows that some projects need big design up front, and knows people that have the specializations he or she doesn't.

.Gov projects, unfortunately, are turf wars, where people scramble for a piece of the cake because money smells good, and success is someone else's problem.

[+] skylan_q|12 years ago|reply
an act which would border on criminal negligence if it was done in the private sector and someone was harmed

For example, someone thought they got coverage for something and they didn't. That mis-communication can lead to actual harm when it comes to medical and financial issues.

[+] fit2rule|12 years ago|reply
>what does that even mean?

It means that if you are writing health-critical software, where lives are involved, and you deliver low quality dreck (as in this case), then you should be held criminally liable.

There is a lot to be said for SIL-4 (http://en.wikipedia.org/wiki/Safety_Integrity_Level). It seems that someone goofed by not giving healthcare.gov a SIL-4 requirement...

[+] lgieron|12 years ago|reply
Another reason is the fact that these large projects rewrite business processes of the organisation (instead of merely automating them). This obviously opens up a huge turf war on how the post-deployment world is going to look like (esp. if there are to be layoffs). Additionally, in .gov, the business processes are always at least partially defined by law, which in consequence needs to be changed. The actual software project is the easy part.
[+] ams6110|12 years ago|reply
Slow, legacy backend systems are not an intractable problem. You can do things such as copy the data to a faster cache, or you use some kind of queuing system so that queries are processed only as fast as the backend can handle (of course the frontend needs to be able to "check back later" for the results).
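That queuing idea can be sketched in a few lines of Python. This is a toy illustration only, not anything from the actual project: a single worker drains the queue, so the backend sees at most one query at a time, while the frontend submits and then polls ("checks back later") for the result.

```python
import itertools
import queue
import threading

jobs = queue.Queue()      # pending queries, in arrival order
results = {}              # job_id -> result, polled by the frontend
_ids = itertools.count(1)

def submit(query):
    """Frontend call: enqueue the query and return a job id immediately."""
    job_id = next(_ids)
    jobs.put((job_id, query))
    return job_id

def check_back(job_id):
    """Frontend call: the result if ready, else None ('check back later')."""
    return results.get(job_id)

def worker(backend):
    """One worker thread: the backend is never asked to handle more than
    one query at a time, however fast requests arrive at the frontend."""
    while True:
        job_id, query = jobs.get()
        results[job_id] = backend(query)
        jobs.task_done()

# Stand-in slow legacy backend for the demo:
threading.Thread(target=worker, args=(lambda q: q.upper(),), daemon=True).start()
jid = submit("hello")
jobs.join()  # in real use the frontend would simply poll check_back()
```

A real system would persist the queue and results, but the shape is the same: the queue absorbs the burst, the backend sets the pace.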

This does support the widely held belief that this system will not be fixed anytime soon. Clearly the management of the project and the design of the architecture were fundamentally flawed, and it's very unlikely that it can be fixed in 30 days or whatever at this point.

[+] vinhboy|12 years ago|reply
Regardless of how hard it was to make a functioning "data hub", they should have, at the very least, built a functioning registration system independent of all other failure points. That way they would at least have had the names and contact information of everyone who wanted to fill out a form, and could have gone back and processed the forms at a later date.
[+] hga|12 years ago|reply
Indeed, as you note there are a bunch of fixes possible for slow back ends. A lot of the data is sufficiently static (e.g. from the IRS, what are the numbers for your 2012 taxes, what's your withholding to date for 2013?) that occasional exports to a database inside of "healthcare.gov" might do the trick. In many cases setting up a fake server mimicking the legacy one with such a static data store behind it could get around some of the "can't change the architecture in 7 weeks" issue. Maybe (that deadline is for those who are learning Obamacare outlawed their old policies, 16 million by one estimate for that subset).

We've also read that Experian is doing both identification and income verification, so they're probably not as hopeless.

The management was, as just about everyone is noting, fatally flawed. However, it's reported to have changed, with QSSI becoming the integrator, and the fix-it czar is saying the right, reality-recognizing things, like making the top item on his punch list stopping the flow of garbage to the insurers. Presumably the managers still in the chain of command, from the White House on down, have been convinced to stop making requirements changes....

We'll see.

[+] critium|12 years ago|reply
I've worked in and out of the public sector for the last 10 years and unfortunately, this is actually _PAR FOR THE COURSE_.

This is not the contractor's fault. It's the government's. Before I left to work with a startup, I was appalled by the lack of ownership on the client's side. Everybody is looking to shuffle responsibility, keep the lowest profile, and do the least amount of work.

It doesn't matter who's writing the code: unless they find somebody competent and passionate on the government side, large projects are destined to fail and would be better off not built by the public sector at all. This is government waste at its best.

I'm neither Republican nor Democrat, but just to add: if my rinky-dink app for the Dept. of Commerce gets shown to the president when it's in 'ALPHA' state, there is no way the most informed person in the world didn't know that this site was going to fail from the get-go.

[+] hga|12 years ago|reply
"unless they find somebody competent and passionate on the government side"

The newly appointed fix-it czar, Jeffrey Zients, definitely sounds competent from his current remarks (see elsewhere in this topic his top priority), hopefully he can muster enough passion for the Maximum Effort required.

And, yeah, I've done some work for the public sector and it's that bad, sometimes worse. In the last case, an entity had Lockheed build an at least half-bespoke (custom) system, which worked pretty well. It then put continued maintenance out to bid, which Lockheed didn't win, and years later, as the DEC Alpha systems were nearing their end of life (the line was of course killed by Compaq/HP), it was discovered that the sources Lockheed left behind wouldn't compile into the binaries in use (e.g. they had used SCCS (!!!) until a few months before launch). This was discovered first by a guy they had to let go after a month or so because of budget screwups, then by me over a year later. But of course the plan and budget were predicated on an estimate, made long before by another contractor, that didn't match the reality of the work.

Needless to say the clients didn't understand the difference between source and binary code, or how they'd painted themselves into a really difficult to exit corner.

As for failing, CMS in its role as integrator did do integration tests 1, maybe 2 weeks before launch. They of course failed hard.

[+] Iftheshoefits|12 years ago|reply
It's the contractors' fault, too. Both parties are playing the same game, here: corporate welfare for the contractors, who usually are owned or operated by people with connections to the government, and long-term job security for the government project management officials who oversee these things.

I don't know if you've ever worked for a contractor, but I have, and I guarantee you the same responsibility shuffling, profile munging, least-amount-of-work attitude exists there. Without it, these contractors wouldn't be able to keep feeding at the trough with the rest of their corporate welfare recipient friends (while they bitch to each other about how evil liberals are, how disgusting entitlements are, etc.).

[+] greenyoda|12 years ago|reply
"There are no easy fixes for the fact that a 30 year old mainframe can not handle thousands of simultaneous queries. And upgrading all the back-end systems is a bigger job than the web site itself. Some of those systems are still there because attempts to upgrade them failed in the past. Too much legacy software, too many other co-reliant systems, etc."

30 year old (1983) mainframes and databases were designed to handle large transaction loads. For example, airline reservation systems and banking systems were built on them.

And upgrading a mainframe (at least an IBM mainframe) to a faster mainframe isn't such a daunting task, since all the code from 30 years ago (or even from the 1960s) is still object-code compatible with the new machines - you can make it run even if you've lost your source code. There's still lots of 30 year old (and older) Cobol code running on mainframes today.

I agree that re-writing the 30 year old software would be hard, but simply getting it to run faster could probably be done just by spending money on the latest mainframes and disk drives. But if nobody ever did a load test on the site, they wouldn't have known that they had to do this. They probably just thought: "Oh, we have to write a web site that talks to a bunch of databases, how hard could that be?" (By the way, they could have written test code to do a load test on those legacy systems without even having a web site running. In retrospect, that's the first thing they should have done, and it would have shown them that their critical path wasn't the user interface.)
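That kind of standalone load test needs no web site at all. A rough Python sketch, where `query_fn` is a stand-in for whatever client call actually hits the legacy system (the function name and reported metrics are made up for illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(query_fn, n_requests=100, concurrency=10):
    """Fire n_requests at query_fn from a thread pool and report latencies.

    query_fn stands in for one call against a legacy backend; you can run
    this against the raw service long before any frontend exists.
    """
    def timed_call(i):
        start = time.perf_counter()
        query_fn(i)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(n_requests)))

    return {
        "requests": n_requests,
        "median_s": latencies[len(latencies) // 2],
        "p95_s": latencies[int(len(latencies) * 0.95)],
    }

# Demo with a stand-in backend that takes about 1ms per query:
stats = load_test(lambda i: time.sleep(0.001), n_requests=50, concurrency=5)
```

Even something this crude, pointed at the real IRS or CMS endpoints months earlier, would have shown whether the critical path was the backend or the UI.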

[+] GVIrish|12 years ago|reply
In theory you could at least improve the hardware the legacy systems are running on. In practice that may not have been anywhere near enough to ensure success.

1. In this case you would've had to start benchmarking the performance of the legacy systems early, far earlier than when they started development in March. If you determine that hardware upgrades are needed, then you'd need to initiate procurement and upgrade projects at one or more of these other agencies. Projects like that may not necessarily be quick to implement.

Maybe the physical space can't accommodate new hardware. Maybe there isn't enough budget to do an upgrade like that. Maybe there aren't enough personnel resources to plan and implement an upgrade of that scale quickly. Maybe those organizations are just barely keeping their heads above water with the way those systems are currently functioning. Maybe they don't even have a handle on what their hardware configuration is. I know someone who worked at an agency where they had to start unplugging stuff to figure out what server did what.

2. This is all making the assumption that the data in those systems is correct and well-formed, and the business logic in those systems is free of bugs. Maybe you get the database schema and find out that A. It's out of date, B. There's no data dictionary, and C. There's 250k lines of business logic tied up in undocumented triggers. Good luck.

Load testing might just be the tip of the iceberg in situations like this. But bottom line is, if the people leading your project don't even think to start looking into this kind of stuff very early on, you might be screwed before you even started.

[+] dreamfactory|12 years ago|reply
Having integrated with these kinds of systems, I can say they aren't up to internet-scale traffic. Apart from raw capacity, they are usually architected for transactional consistency, not large, indeterminate numbers of concurrent users. This is why we have things like ESBs and async enterprise integration patterns.

I'd be surprised if those weren't already in place on this project but bet it's really poorly done. My money is on a totally manual test process which means a deployment misses loads of cases and takes weeks and where a load test is 10 guys in India hitting f5.

[+] hga|12 years ago|reply
You make very good points about early load testing. Unfortunately, as detailed in so many places including the OP, the integrator, the government's CMS, clearly didn't have the expertise to realize this. E.g. any organization vaguely competent at software development knows you have to freeze the requirements well before the week before launch, and not make major changes less than 2 months out (like the heavy registration process instead of allowing window shopping).
[+] digikata|12 years ago|reply
"Failure isn’t rare for government IT projects – it’s the norm. Over 90% of them fail to deliver on time." Is this really much different from the success rate of startup culture, where VCs count themselves successful if 10% of their investments yield a return? The startup environment has the "success rate advantage" that if the venture really isn't getting traction, you can walk away from it, or change direction and do something related but not your original objective.

Government projects like the healthcare exchange don't have that degree of freedom - if they go down the wrong track, the only choice is put in more resources until it's back on track. Giving up or changing objectives isn't a decision under the control of the project - it's a legislative or budgetary question.

[+] JulianMorrison|12 years ago|reply
The answer is that you can't structure the transaction as a realtime query. You have to structure it as something that's sent and gives you a ticket, and the reply associated with that ticket will come back in its own time.

Stick the processing pipeline in Twitter Storm (which can retry any step until the whole pipeline is done) and structure the requests as nearly-idempotent (so a repeated reply is harmless, and the first arrival associated with the ticket wins). Finally, you have an "inbox" where people can wait for and see their answer, with optional SMS and email notification.
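The ticket-and-inbox flow described above can be sketched roughly like this, with plain Python standing in for the Storm pipeline (the `TicketBox` class and its method names are invented for illustration):

```python
import uuid

class TicketBox:
    """Ticketed, nearly idempotent request handling: submitting returns a
    ticket at once; replies may arrive more than once (retried pipeline
    steps), and only the first reply associated with a ticket wins."""

    def __init__(self):
        self.pending = {}   # ticket -> request payload awaiting an answer
        self.inbox = {}     # ticket -> first reply to arrive

    def submit(self, request):
        """Hand the user a ticket immediately; processing happens later."""
        ticket = str(uuid.uuid4())
        self.pending[ticket] = request
        return ticket

    def deliver(self, ticket, reply):
        """Idempotent delivery: a repeated reply for the same ticket is
        harmless; the first arrival wins and later ones are dropped."""
        if ticket in self.inbox:
            return False
        self.inbox[ticket] = reply
        self.pending.pop(ticket, None)
        return True

    def check(self, ticket):
        """The user's 'inbox': the reply, in its own time, or None."""
        return self.inbox.get(ticket)

# Demo: a retried pipeline step delivers twice; only the first counts.
box = TicketBox()
ticket = box.submit({"applicant": "example"})
box.deliver(ticket, "eligible")
box.deliver(ticket, "retry-duplicate")  # ignored: first arrival already won
```

The SMS/email notification layer then just watches the inbox; the key property is that retries anywhere in the pipeline can't double-apply a result.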

[+] hga|12 years ago|reply
Interesting, but at some point people have to be shown a selection of offerings. That selection must allow seeing if your doctors are in the network.

Your suggestion would allow for that, but not, I sense, in the "instant gratification" way we're used to with web sites. I.e., "thanks for your input; wait for an SMS/email/N hours until you log in again for the next step".

BTW, I've read it's a 30 step process. Not all will require "take a ticket and wait to be called", but more than 1 or 2 I suspect.

[+] bhauer|12 years ago|reply
I have not followed the development of this news closely, but skimming these updates has been amusing. I do have a couple very basic questions. If these are stupid, I apologize ahead of time.

My understanding from previous coverage is that some of the state exchange sites, such as California's, are performing acceptably. If that is true, do those state sites also connect to and query the same legacy systems as the federal site? If so, why doesn't the federal government simply ask for or take that code? Surely it's been made available to them? If not, are the legal requirements for the states' exchanges somehow different than the federal site? That seems unlikely since my understanding is the federal site is simply standing in for states that elected to not create exchange sites. I don't see why it would be subject to extra requirements.

What am I missing here?

[+] taternuts|12 years ago|reply
> Amazingly, none of this was tested until a week or two before the rollout, and the tests failed.

This is absolutely incredible.... two weeks?! Dealing with these legacy systems should have been the absolute first thing tested - is it not the most likely point of failure/bottleneck? Someone on the team had to have been screaming about this and being ignored, all the while shitting their pants waiting for go-live for the whole thing to crumble.

[+] patja|12 years ago|reply
I'm wondering why states were allowed to build their own systems and opt out of the federal site. From the Washington state site we get passwords emailed in clear text, a failure to even allow people to enter all components of their income (resulting in inflated tax credit decisions), using monthly income figures where annual ones should be used (again, more incorrectly inflated tax credits). In Oregon they say they can't even log in or get through the application. Each of these state-specific sites cost tens of millions, each resulting in their own unique set of defects on launch, to implement a federal program.

The press seems very focused on the obvious availability and performance problems as well as the errors that come up within the sites that prevent someone from completing their application. There are a whole slew of second-order defects that make it appear your application was successful and correct but were based on incorrect calculations, incomplete data, or other bugs that are not obvious to the user at the time they complete the process.

[+] fauigerzigerk|12 years ago|reply
Focusing on the performance or scalability of these ancient backend systems is beside the point. It's simply not a great idea to connect a significant number of backend systems run by different organizations in one synchronous online transaction. The overall probability of failure may simply be too high, irrespective of any scalability issues.
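To make that arithmetic concrete: if each of n backend calls independently succeeds with probability p, a synchronous transaction that chains all of them succeeds with probability p^n. A tiny illustration (the numbers are hypothetical, not measurements of any real system):

```python
# Success probability of one synchronous transaction that must get a
# correct, timely answer from every one of n independent backends.
def chain_success(p: float, n: int) -> float:
    return p ** n

# Even quite reliable services compound badly: ten 99%-reliable hops
# leave roughly a 1-in-10 failure rate per end-to-end transaction.
ten_hops = chain_success(0.99, 10)
```

This is the quantitative version of the point above: the failure rate is dominated by how many systems you chain, not by how fast any one of them is.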
[+] snowwrestler|12 years ago|reply
I think the easiest fix at this point is to simply design around the known delay in synchronizing all the 3rd party data calls.

Have people enter their info, then show them a screen that says "your quote will be emailed to you in 24 hours." Then the integration system has 24 hours to retry any failed data pulls, match up all the data, and generate a quote.
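The retry half of that design might look roughly like this in Python. The `fetch` callable and its failure pattern are hypothetical; the point is that a 24-hour window lets you retry with backoff instead of failing the user's request in real time:

```python
import time

def pull_with_retry(fetch, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky third-party data pull with exponential backoff.

    fetch is whatever client call pulls one piece of data; with a long
    quote window the pipeline can afford several widely spaced retries.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * 2 ** attempt)

# Demo: a stand-in service that fails twice, then answers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("backend busy")
    return {"quote": 123}

result = pull_with_retry(flaky, sleep=lambda s: None)  # no real waiting in the demo
```

Once every pull has either succeeded or exhausted its retries, the quote email goes out; nothing in the user-facing flow ever waits on a slow backend.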

[+] hga|12 years ago|reply
Problems:

An insistence on a heavy (thoroughly validated) registration process.

People want to have choices. Maybe they don't want to get insurance from company A. There are also monthly payment vs. deductible tradeoffs in the Bronze vs. Silver etc. plans. And some demand cost sharing after you've used up your deductible (I wouldn't really care if the plan had a 10 million dollar limit if I was expected to pay 20-30% of that...).

ADDED: plus you must be able to see if your doctors are in a plan's network.

Despite the "one size fits all" new minimum gold plated plan, there's still a lot of tradeoffs ... and then email isn't necessarily reliable enough for the response. I can't see them avoiding a "or check back on the site tomorrow..." option.

[+] snorkel|12 years ago|reply
I don't know how much of this is true, but I bet the truth is no less hilarious. It wouldn't surprise me if this system has no concept of usability or offline processing queues. No matter how complex it is to process an application, it's common sense to just give the user immediate feedback: "Thank you for your order. We'll contact you by email within N days to follow up and report your application status." Do these people expect Amazon to process orders in realtime and fling physical goods at their door in minutes? Should buying health coverage be zero-conf, one-click, instantaneous?
[+] hga|12 years ago|reply
Well, I've heard there was a "7 second" response time metric that was part of the plan.

But, yes, Amazon's "eventually consistent" system takes its time. E.g., while it's never happened to me, I know that the somewhat delayed confirmation email is only sent once the whole system has reserved a book for me, even if there's only one copy in stock and someone else ordered it at about the same time. Etc.

[+] ape4|12 years ago|reply
The frontend assumed the backend was fast enough. That's the problem. If the frontend had been made to handle really slow responses from the backend, it would look different. It would not make people wait while transactions occurred. Or it might have a page that displayed your progress: in order to do this for you we need to contact 10 databases - here is the progress of each:

    Database One:   [=======----------]
    Database Two:   [============-----]
    Database Three: [==---------------]
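For what it's worth, that progress page is easy to mock up. A toy Python rendering, with made-up progress values:

```python
def progress_bar(name, done, total, width=17):
    """Render one backend's progress as an ASCII bar like those above."""
    filled = int(width * done / total)
    return f"{name:<15} [{'=' * filled}{'-' * (width - filled)}]"

# Hypothetical per-database progress, out of 17 units each:
lines = [
    progress_bar("Database One:", 7, 17),
    progress_bar("Database Two:", 12, 17),
    progress_bar("Database Three:", 2, 17),
]
```

The hard part, of course, is not drawing the bars but getting honest progress signals out of ten backends you don't control.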
[+] hga|12 years ago|reply
Heh.

But I would half joke that that would result in a bunch of angry people with torches and pitchforks showing up at sites of the owners of Database Two....

Which might be totally unfair if it's not really their fault.

[+] seivan|12 years ago|reply
I don't believe in this anymore

"Everyone outsources large portions of their IT, and they should. It’s called specialization and division of labor. If FedEx’s core competence is not in IT, they should outsource their IT to people who know what they are doing."

These days I believe each department of government that needs an iPhone application would do better to hire an iOS developer full time to maintain and polish the fuck out of it, continually.

[+] mgkimsal|12 years ago|reply
I tend to agree here.

The siren song of 'outsourcing' sounds great, but it's just moving the goal posts, really. The dept/org/staff still need to understand their internal problems well enough to document and communicate them properly, including translating to the outsourcing company.

Hiring IT people internally to be there long term to really understand the agency/dept/org/staff and their problems from an IT perspective should be a requirement for any org. Without a competent person who understands IT and the business needs, and has a longer term investment in the business itself, there's little chance of being able to choose an appropriate outsourcing company to do the job (or indeed to define the job competently in the first place).

[+] malandrew|12 years ago|reply
Strangely, based on the title I thought this was going to be about future startup trends. I.e., for the last 6 years or so we've seen a revolution in interface design as a competitive advantage when creating a new startup, but as the low-hanging-fruit opportunities are used up, a lot of the really meaty opportunities are going to be in software where there is a significant backend component performing a lot of heavy lifting and magic.

I'm not in the least bit surprised to see that a lot of the work and resulting problems with healthcare.gov are on the backend.

I just wish the government realized that we have all these amazing developers over in the Bay Area who can do a better job than the majority of those developers currently writing software for government contracts. I'm shocked no one in government has said to themselves, "What do we have to do to make our software problems accessible to the types of engineers working at the Googles and Dropboxes of the world?"

[+] hga|12 years ago|reply
If you've ever done government contracting, you'd know as things stand now a large fraction of those "amazing developers" wouldn't put up with its insanities when they have good alternatives. E.g. would you be happy filling out time sheets for exactly 40 hours, no matter what you did or should do?

I've done this twice. Once for a NASA project where my smaller company was doing work NASA judged as too hard for CSC; that my manager was one of the best ever certainly made a difference, as did getting any tool I asked for (among other things, I was cleaning up after another consultant to this company who had abjectly failed). Not sure how long I would have stayed if that had been an option (the NASA consulting contract was eliminated altogether when Clinton downsized NASA).

The other one I've mentioned elsewhere in this discussion, and I quit in disgust after less than 2 months.

[+] GVIrish|12 years ago|reply
Having the best developers in the world is useless if you have incompetent management.

In this case, requirements were delivered extremely late, and they were still in flux up to a week before deployment. The system integrator went forward with only ONE WEEK of integration testing. There were 55 companies involved in this, not to mention all the state exchanges and other federal software systems that had to be integrated.

Better developers would have made a positive impact, but nowhere near enough to make this project a success.

[+] natural219|12 years ago|reply
"Unless it is enjoyable or educational in and of itself, interaction is an essentially negative aspect of information software. There is a net positive benefit if it significantly expands the range of questions the user can ask, or improves the ease of locating answers, but there may be other roads to that benefit. As suggested by the above redesigns of the train timetable, bookstore, and movie listings, many questions can be answered simply through clever, information-rich graphic design. Interaction should be used judiciously and sparingly, only when the environment and history provide insufficient context to construct an acceptable graphic."

"Interaction considered harmful", by Bret Victor http://worrydream.com/MagicInk/

[+] USNetizen|12 years ago|reply
The issue I see here is that the author of this article has marginal experience with federal contracting. "The people who wrote the code for these systems are long gone...they are prone to transaction timeouts" ... wrong, wrong and wrong. There are plenty of coders still around maintaining these systems, even the ones built on obscure technologies like MUMPS, and they run on rather robust hardware in huge datacenters.

Second of all, the government should NEVER outsource integration - the systems integrator requires an authority to manage other contractors that only the government is capable of holding.

[+] hga|12 years ago|reply
Are you really sure about your first point? I have a state government counterexample here: https://news.ycombinator.com/item?id=6622223

As for the second, whatever might be right, we've been told that only the Pentagon has retained that ability for "medium sized" weapons projects. Anyone know anything to the contrary?

[+] erichocean|12 years ago|reply
As an aside, when I hear about the problems they're having understanding the legacy data formats, it makes me wonder how far you could get with a high-powered, big-data NLP system to "parse" the data. Sort of like how Google Translate works.

There are rules, after all, they're just not written down. Why not let the computer figure them out, with continuous training from people until the computer's accuracy is high enough?

I suspect instead they tried to write parsers and trusted "the spec", which was never even right the day it was written down. :)
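As a toy illustration of the "let the computer figure out the rules" idea: instead of hand-writing a parser against a spec, you can search for the parsing rules that best reproduce a handful of human-labeled records, then apply the winning rules to the rest. This is a minimal sketch, not anything healthcare.gov actually did; the record layout, delimiters, and field names below are entirely hypothetical.

```python
# Sketch: infer parsing rules (delimiter + date format) from a few
# human-labeled legacy records, rather than trusting a written spec.
# All record formats here are made up for illustration.
from datetime import datetime

# Human-labeled training pairs: raw legacy record -> desired structured fields.
LABELED = [
    ("SMITH^JOHN^19620714", {"last": "SMITH", "first": "JOHN", "dob": "1962-07-14"}),
    ("DOE^JANE^19801102",   {"last": "DOE",   "first": "JANE", "dob": "1980-11-02"}),
]

CANDIDATE_DELIMS = ["^", "|", ",", "\t"]
CANDIDATE_DATE_FMTS = ["%Y%m%d", "%m%d%Y", "%d%m%Y"]

def score(delim, date_fmt):
    """Count how many labeled records this rule combination reproduces exactly."""
    hits = 0
    for raw, want in LABELED:
        parts = raw.split(delim)
        if len(parts) != 3:
            continue
        try:
            dob = datetime.strptime(parts[2], date_fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue  # date format doesn't explain this record
        if {"last": parts[0], "first": parts[1], "dob": dob} == want:
            hits += 1
    return hits

def learn_rules():
    """Pick the (delimiter, date format) pair that best explains the labels."""
    return max(
        ((d, f) for d in CANDIDATE_DELIMS for f in CANDIDATE_DATE_FMTS),
        key=lambda pair: score(*pair),
    )

def parse(raw, rules):
    """Apply the learned rules to an unlabeled record."""
    delim, date_fmt = rules
    last, first, dob = raw.split(delim)
    return {"last": last, "first": first,
            "dob": datetime.strptime(dob, date_fmt).strftime("%Y-%m-%d")}

rules = learn_rules()
print(parse("ROE^RICHARD^19751230", rules))
```

A real system would replace the brute-force rule search with a statistical model and feed low-confidence parses back to humans for labeling, which is the "continuous training" part of the comment above.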

[+] colomon|12 years ago|reply
Can anyone verify this info? I was a bit surprised to see it wasn't better sourced than the comments of a previous Marginal Revolution post. (Nothing against MR, but this seems like huge news if true.)
[+] hga|12 years ago|reply
Well, it's been established all these external data sources are being used.

So it's either queried on-line, maybe with caching (maybe someday), or via a periodically refreshed off-line copy inside Healthcare.gov, right?

I'll bet the House hearings made clear which is happening.