The obsession with C/C++ here is really weird. Take the MCO (Mars Climate Orbiter) failure: that's a classic, textbook problem that can be structurally guaranteed not to happen with even a basic type system. It should be literally impossible to confuse values of different types/units/dimensions like this in something described as "safety-critical".
It seems like all the resources here are concerned with trying to whittle C/C++ into an appropriate choice of tool, rather than choosing a different tool. It seems like a 1980s-1990s mindset.
As I understood the MCO failure, a better type system wouldn't have helped: the issue was that one program produced output in US units while another program expected metric input.
Units can help verify that a formula within a program is consistent; a formula that mixes units, e.g.

    velocity v = 0.5 * (metric_acceleration)9.8 * t;
    cout << v.to_metric();

won't compile if the units don't match.
But it won't help with

Program 1:

    velocity_imperial v = 0.5 * (metric_acceleration)9.8 * t * t;
    cout << v;
How would a basic type system protect against incorrectly interpreting an imperial floating point value as a metric floating point value? That seems like an especially weak example, and fundamentally falls under the realm of logical fault endemic of every possible programming language.
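That boundary failure can be made concrete with a sketch (types and function names are made up for illustration). Each program is internally consistent; the bug lives at the interface, where the value is serialized to a bare number and its unit is lost:

```cpp
#include <sstream>
#include <string>

struct FeetPerSec   { double value; };
struct MetersPerSec { double value; };

// Program 1 computes in imperial units and writes the raw number out.
std::string program1_output(FeetPerSec v) {
    std::ostringstream out;
    out << v.value;            // only the digits cross the process boundary
    return out.str();
}

// Program 2 reads the same stream and labels the number metric. Both
// programs type-check; no compiler can see the mismatch between them.
MetersPerSec program2_input(const std::string& s) {
    std::istringstream in(s);
    double raw = 0.0;
    in >> raw;
    return MetersPerSec{raw};
}
```

Feeding `program1_output(FeetPerSec{100.0})` into `program2_input` yields a "metric" 100.0: the number survives, the unit does not, which is essentially the MCO failure mode.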
There are legitimate gripes about C/C++, especially in a space with hostile actors and unknown inputs, but that example was particularly weak.
Anecdotally, I've personally talked to several people in the last few years who do things like write guidance systems for rockets. My limited sample mostly worked in either C or a limited subset of C++, albeit with a variety of tooling on top to automatically test for and catch various kinds of common flaws.
So as weird as this may seem to you, that mindset is applicable much more recently than you might expect.
I'm halfway through this, and not only is the theory insightful and often unexpected, but it's also incredibly engaging, especially for such an academic work.
Other than the latest MISRA, I really enjoyed "Better Embedded System Software" by Phil Koopman.
Ideally you should read it before starting your project, since it deals with the product specification and requirements-gathering phase, which is your starting point in safety-critical systems.
Does anyone know how software quality is handled in complex supply chains, e.g. automotive? From my point of view, software is a second-class citizen in areas dominated by manufacturing and classical engineering.
I guess testing an over-the-air update for a car that was built by an OEM and thousands of suppliers must be quite a task.
It's getting better, but hardware companies tend to view software as second-class. They think it's "easy", though they're finally accepting that it's not. It's taken decades of fatalities, cost overruns, and missed deadlines for them to realize this, but they're realizing it.
Typically the software has to be developed according to some ISO standard like https://en.wikipedia.org/wiki/ISO_26262 and the supplier has to have some proof like from the UL or the German TÜV that they followed the procedures.
Code like the Boeing 787's avionics package goes one better: the spec specifies what the register values should be after each step of execution, and there's a company that takes the code, puts the processor in single-step mode, and checks.
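The shape of that check might be sketched like this (purely illustrative; the real 787 process, tooling, and register set are not public):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// The spec lists the expected register values after each instruction; the
// verifier single-steps the processor and compares after every step.
struct RegisterSnapshot {
    std::uint32_t r0;
    std::uint32_t r1;
    std::uint32_t pc;
    bool operator==(const RegisterSnapshot& o) const {
        return r0 == o.r0 && r1 == o.r1 && pc == o.pc;
    }
};

// Returns the index of the first step where the observed registers diverge
// from the spec, or -1 when every compared step matches.
int first_divergence(const std::vector<RegisterSnapshot>& spec,
                     const std::vector<RegisterSnapshot>& observed) {
    std::size_t n = spec.size() < observed.size() ? spec.size() : observed.size();
    for (std::size_t i = 0; i < n; ++i) {
        if (!(spec[i] == observed[i])) {
            return static_cast<int>(i);
        }
    }
    return -1;
}
```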
> the JSF project has been reported to have lots of software defects
I haven't read anything that differentiates between these two possible scenarios:
1) Poor engineering, execution, etc.
2) The level of bugs you'd expect in a software project like this. When I think of it this way, I'm amazed it was ever completed (but maybe I'm thinking about it the wrong way):
* Meet the specifications of not only three U.S. military services but also militaries and other entities in multiple national governments (with all the politics, compromise and complexity that involves).
* Invent and implement technologies to provide capabilities so bleeding edge that few people will imagine some of them for years, if not decades. There are no prior designs; nothing like it has ever been done. Part of the point is to exceed competitors' engineering capabilities by as much as possible.
* Integrate these technologies into a massive system of systems, arguably the most complex system in the history of humankind.
* The system is human-rated.
* Performance is the highest priority; there is no easy trading of performance for safety: human lives, the outcomes of battles, the fates of nations, and the course of history may depend on performance.
* Accomplish this in secret, greatly restricting your access to outside resources: you can't publish a paper and get feedback, or present at a conference, to find out whether an approach will work.
* Accomplish this in coordination with thousands of suppliers in many countries.
* Because it's hardware and very expensive, your ability to iterate is limited. My completely amateur guess based on the above is that it's a massive, decades-long waterfall-style project.
C++ is not considered safe for any RTOS; in fact, you won't find it used in aviation embedded devices (referring to the big 3).
Tools, yes: you can use higher-level languages to your heart's content.
Huh? The F-22A, F-35, P-8, and P-3 are all flying C++ code. Those are just the programs I have personally touched (not necessarily the code, though). Where did you get the idea that it "is not considered safe for any [real-time] system"?
greenhouse_gas | 9 years ago:
Program 2:

    velocity_metric v;
    cin >> v;
    BurnFor(doSomeRocketScienceToCalculateEngineBurn(v));
AlexDenisov | 9 years ago:
1. There was a spacecraft (MCO) and a ground module that was sending it data from Earth.
2. The module was delivered late, when MCO had already been on its way for 4 (!) months; before that, staff calculated the needed data manually.
3. Some teams switched into "defensive mode", unwilling to communicate or to fix the problem even when it was clear.
Jtsummers | 9 years ago:
https://mitpress.mit.edu/books/engineering-safer-world
This list is barely scratching the surface of safety-critical system engineering, but it's a start.
kqr2 | 9 years ago:
http://sunnyday.mit.edu/safer-world.pdf
[1] https://betterembsw.blogspot.com.br/2010/05/test-post.html
GoToRO | 9 years ago:
Also, people don't realize it, but by using a linter you're basically no longer writing C, but "safe C". It's like a different language.
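A tiny illustration of the point (the function names are invented; the rule itself is a classic one that most C linters and MISRA-style checkers enforce):

```cpp
// The same predicate written twice. The first version uses a construct most
// C linters flag; the second is the restricted "safe C" style they accept.

// Flagged style: assignment inside a condition. Legal C/C++, classic bug.
int is_zero_flagged(int x) {
    if (x = 0) {   // linter: '=' used where '==' was almost certainly meant
        return 1;
    }
    return 0;
}

// "Safe C" style: explicit comparison, single assignment, single exit point.
int is_zero_safe(int x) {
    int result = 0;
    if (x == 0) {
        result = 1;
    }
    return result;
}
```

Rule sets like MISRA ban whole families of such constructs at once, which is why the result feels like a different language.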
jacquesm | 9 years ago:
http://erlang.org/download/armstrong_thesis_2003.pdf
macintux | 9 years ago:
http://www.hpl.hp.com/techreports/tandem/TR-85.7.pdf
partycoder | 9 years ago:
However, the JSF project has been reported to have lots of software defects.
planteen | 9 years ago:
http://www.militaryaerospace.com/articles/2013/10/software-c...