
Software Engineering Lessons from Aviation

98 points | riceo100 | 6 years ago | riceo.me

55 comments


jefffoster|6 years ago

I think there's a lot to learn from the aviation industry. I did a talk at my company's internal conference on this (turned into words at https://medium.com/ingeniouslysimple/why-dont-planes-crash-1...).

For me it's the mindset that differs. Too often as software engineers we find a bug and just fix it. Aviation goes a step deeper and finds the environment that created the bug and stops that.

Unfortunately, the recent 737 MAX incidents seem to have changed this. From what I understand the reaction to the problems sounds more like what I'd expect a software business to do, rather than the airline industry!

ken|6 years ago

There are a handful of highly respected books that everyone knows software engineers should read, like "The Mythical Man-Month" and "Peopleware". Yet whenever I read one of these, I found I learned very little. Everything in them was obvious -- to those who are 'in the trenches'. What we need is a way to get managers to read these, and take them to heart. Even when my manager had a copy of the book sitting on their desk, they rarely had read it, and they absolutely never followed its advice.

(When pushed, they might say "That was a groundbreaking book, for its day, but the industry moved on." Now we've got open floor plans, and AGILE SCRUM, and free snacks ... and also no evidence these are an improvement to the software development process, but never mind.)

This aviation mindset you refer to is the same way. I can't tell you how many times this happened to me:

- User clicks a button, and it doesn't do what it says it should.

- A bug is filed, and assigned to me.

- I investigate, and find the problem. I start preparing a fix.

- Manager comes by to pester me. "Why isn't this button fixed? Shouldn't that have been a quick fix? We played Planning Poker last week and everybody else who isn't working on it agreed it should only be a 1!"

- See, we're computing this value incorrectly, and I grepped the codebase and it turns out we're also doing it wrong in 7 other places, which causes...

- "The customer wants this one button fixed. Don't worry about the others. Don't worry about testing, or cleaning up, or documenting why the mistake was made or how it should have been done. Those aren't on this milestone. Just fix this one button and move on. We need you working on the new features we promised our customers this month..."

Modern software development is a circus of improperly aligned incentives.

0x445442|6 years ago

The blog post was good, but it was just another variation on the age-old conundrum: fast, good, cheap; pick two.

Those who value quality are going to be swimming upstream in most organizations that develop software, because the bean counters always go straight for fast and cheap.

qznc|6 years ago

I work in automotive. In Europe there is the ASPICE standard which is actually a reasonable guideline for (commercial) software development (unit tests, code reviews, etc). Customers require you to follow it. Top management requires you to follow it. Projects still ignore it. Writing unit tests at the end of a project misses most of the point, for example.

ellius|6 years ago

After fixing a recent bug, I asked my client company what postmortem process, if any, they had. I informally noted about 8 factors that had driven the resolution time to ~8 hours from what probably could have been 1 or 2. Some of them were things we had no control over, but a good 4-5 were things in the application team's immediate control or within its orbit.

These are issues that will definitely recur in troubleshooting future bugs, and doing a proper postmortem could easily save 250+ man-hours over the course of a year. What's more, fixing some of these issues would also aid in application development. So you're looking at immediate cost savings and improved development speed just by doing a napkin postmortem on a simple bug. I can't imagine how much more efficient an organization with an ingrained and professional postmortem culture would be.

jasode|6 years ago

>as software engineers we find a bug and just fix it. [...] Unfortunately, the recent 737 MAX incidents seem to have changed this.

I think there's some nuance about MCAS that's lost in all the media reports. As far as I understand, the MCAS software didn't have a "bug" in the sense we programmers typically think of. (E.g. Mars Climate Orbiter's software programmed with incorrect units-of-measure.[0])

Instead, the MCAS system was poorly designed because of financial pressure to maintain the fiction of a single 737 type rating.

In other words, the MCAS software actually did what Boeing managers specified it to do:

1) Did the software read only _1_ AOA sensor, leaving a single point of failure, instead of reconciling _2_ sensors? Yes, because that was what Boeing managers wanted the software to do. It was purposefully designed that way. If the software had been changed to reconcile 2 sensors, it would have led to a new "AOA DISAGREE" indicator[1], which would then have raised doubts at the FAA about whether Boeing could give pilots a simple iPad training orientation instead of expensive flight-sim training. Essentially, Boeing managers were trying to "hack" the FAA criteria for a "single type rating".

2) Did the software make adjustments of an aggressive and unsafe 2.5 degrees instead of a more gentle and recoverable 0.6 degrees? Yes, because Boeing designed it that way.

Somebody at Boeing specified the software design to be "1 sensor and 2.5 degrees" and apparently, that's what the programmers wrote.
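The single-sensor vs. two-sensor distinction can be sketched in code. This is purely illustrative: the threshold, function names, and disengage behavior are assumptions for the example, not Boeing's actual logic.

```python
AOA_DISAGREE_THRESHOLD_DEG = 5.5  # hypothetical cross-check threshold


def read_aoa(left_deg: float, right_deg: float):
    """Reconcile two redundant angle-of-attack readings.

    Returns (aoa, disagree). A single-sensor design would simply trust
    left_deg, so a stuck or damaged vane feeds bad data directly into
    whatever automation consumes it.
    """
    disagree = abs(left_deg - right_deg) > AOA_DISAGREE_THRESHOLD_DEG
    aoa = (left_deg + right_deg) / 2.0
    return aoa, disagree


def mcas_style_command(left_deg: float, right_deg: float) -> float:
    """Illustrative trim command: stand down when the sensors disagree,
    rather than commanding nose-down trim from one faulty reading."""
    aoa, disagree = read_aoa(left_deg, right_deg)
    if disagree:
        return 0.0  # disengage and alert the crew instead of acting
    return 2.5 if aoa > 15.0 else 0.0  # trim adjustment in degrees
```

The point of the sketch is the shape of the design, not the numbers: with one sensor there is nothing to compare against, so the "disagree, therefore disengage" branch cannot exist at all.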

I know we can play with the semantics of "bug" vs "design" because they overlap, but to me this seems a clear case of faulty "design". The distinction between design and bug matters because it determines where the root-cause fix belongs.

The 737 MAX MCAS software issue isn't like the Mars Climate Orbiter or Therac-25 software bugs. The lessons from MCO and Therac-25 can't be applied to Boeing's MCAS because MCAS's unwanted behavior happened in a layer above the programming:

- MCO & Therac: design specifications are correct; software programming was incorrect

- Boeing 737MAX MCAS: design specifications incorrect; software programming was "correct" -- insofar as it matched the (flawed) design specifications

[0] https://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Cause_of_...

[1] yellow "AOA Disagree" text at the bottom of display: https://www.ainonline.com/sites/default/files/styles/ain30_f...

jayd16|6 years ago

Retrospectives are a common part of agile. Only slightly less common is skipping retrospectives.

sn|6 years ago

Checklists and written procedures are very important. One of the earlier things I did when coming into my company was create a written procedure for software upgrades until we had time to automate it with ansible.

One thing I have not had very good discipline about is I want to use checklists both for code submitted for review and when I'm doing reviews. Lint checkers etc. can only go so far.

If anyone has published checklists for code reviews I'd be curious to see them. This one seems reasonable: https://www.liberty.edu/media/1414/%5B6401%5Dcode_review_che... though I'd add concurrency to the list.

cjbprime|6 years ago

This was great!

> 1. Don’t kill yourself

> 2. Don’t kill anyone else

Could we reorder these, though? Every once in a while a plane will hit a house and kill its occupants (and the pilot, usually) and it's so awful. I think not killing others as a pilot is so much more important than not killing yourself.

X6S1x6Okd1st|6 years ago

That ordering reminds me of the first rule of search and rescue: don't create another victim.

If your job is to save a life and that life depends on you, you don't do anyone any favors by dying.

maxxxxx|6 years ago

I think the idea is that if the pilot dies then most likely passengers or others will die too. Someone needs to control the plane.

myl|6 years ago

"...plenty of episodes of Mayday/Air Crash Investigation available on Youtube too. (Be warned though, all doomed flights take off from one of the busiest airports in the world .)" Great show. Comment is spot on, and don't forget "investigators were under extreme pressure".

billfruit|6 years ago

Though the article isn't about software development in the aviation industry, a few thoughts on that:

The industry is really slow to change its practices and tools. Take the use of C for most software: I feel a safer language ought to be preferred.

Or the use of the 1553 bus for inter-device communication: the bus and protocol aren't general-purpose; they are very opinionated/rigid about the manner in which communication should happen. And the hardware parts for it are horrendously expensive compared to most Ethernet/IP equipment. There is an aviation Ethernet standard, but adoption of it has been slow.

magduf|6 years ago

>The industry is really slow to change its practices and tools. Take the use of C for most software: I feel a safer language ought to be preferred.

And what language would that be, where it has absolute determinism (which rules out anything with GC)?

They tried using Ada years ago for avionics. The problem here is that no one knows Ada any more, and no one really wants to make a career out of it since it isn't used anywhere else.

So, in practice, C and (a narrow subset of) C++ get used. Maybe Rust would be a good choice in the future.

HeyLaughingBoy|6 years ago

> it is very opinionated/rigid about the manner in which communication should happen

This could be a strong factor in its popularity. If things must happen in a certain order, then the behavior of the system becomes easier to verify. The value of easy verification in safety-critical systems can't be overstated.

starpilot|6 years ago

Not killing yourself, and running a checklist (like we learned in driver's ed but apply informally at best), also apply to driving a car.

bdamm|6 years ago

Uh, no. In a car, if things go badly you pull off the road and work on a solution. If things go really badly, you have seatbelts, airbags, crumple zones, and a thick frame to help you out.

In an airplane, if things go badly, you keep flying until you land. If things go really badly, remember that everything is built to be lightweight, and unless the crash is well controlled, everything will be destroyed and everyone will die. If your engine quits, your cabin ruptures, or your instrumentation fails, you keep flying. And you need instruments; in poor visibility, your own sensory inputs are in fact faulty and won't help you figure out which way is down.

Unlike in a car, where it's pretty obvious where the ground is, for example.

marcosdumay|6 years ago

I still hold my opinion that checklists are for hardware issues. One should not be filling them out for software tasks. Instead, software should be automated, automatically tested, and automatically verified: routine manual checks are an anti-feature and inversely correlated with quality.
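One way to picture that claim: each item a human would tick off on a release checklist becomes an assertion a machine runs every time. A minimal sketch, with entirely hypothetical check names and logic:

```python
def version_bumped(old: str, new: str) -> bool:
    """Checklist item 'did you bump the version?' as code:
    the new semantic version must be strictly greater."""
    return tuple(map(int, new.split("."))) > tuple(map(int, old.split(".")))


def changelog_mentions(changelog: str, version: str) -> bool:
    """Checklist item 'did you update the changelog?' as code."""
    return version in changelog


def release_gate(checks):
    """Fail loudly if any check is False; no human box-ticking.

    `checks` is a list of (name, passed) pairs gathered from the
    automated checks above.
    """
    failures = [name for name, passed in checks if not passed]
    if failures:
        raise RuntimeError(f"release blocked: {failures}")
    return True
```

For example, `release_gate([("version bumped", version_bumped("1.4.2", "1.5.0"))])` passes, while a forgotten version bump stops the release instead of relying on someone remembering to check.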

skookumchuck|6 years ago

The article talks about pilot procedures, not engineering procedures specific to aviation.

horacio_colbert|6 years ago

Thinking of aviation makes me remember the impact of doing things right.