OldHand2018|4 years ago
Also, if you have the computer checking everything, then those 5 people that are supposed to be redundantly computing the navigation plan are highly likely to be less diligent. Human nature and all. Isn't that likely to result in a worse outcome?
kbenson|4 years ago
I'm not sure I've ever seen a situation in practice where an additional safety check made the situation worse. Those same people who shirk their duties and half-ass their job under the assumption the computer will just find the problems generally make a plethora of other mistakes if a computer isn't there to double check.
Computer verification of work, usually done by applying rules and heuristics, is useful and, when done well, roughly analogous to an additional human checker IMO. If policies and expectations are set right, it's a better outcome.
This may or may not follow for the initial calculation being done by computer and then checked by a human. Some of the competitiveness of people to make sure they do the job well and don't need fixes from a computer/human checker goes right out the window, and perhaps that does lead to complacency.
soneil|4 years ago
This is surprisingly easy to fix. If the computer notices before the human, call that a failure. Say, if the computer spots terrain higher than the current depth within X radius (that wasn't intentionally planned for), that's a failure.
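A minimal sketch of the kind of check I mean, assuming a made-up bathymetry format and made-up names (illustrative only, not any real planning tool):

    import math

    # Sketch of the check described above: given a planned waypoint
    # (position + depth) and charted bathymetry, flag any terrain within a
    # chosen radius that rises above the planned depth. Data shapes, names,
    # and numbers are all hypothetical.
    def terrain_conflicts(waypoint_xy, bathymetry, radius_m, planned_depth_m):
        """Return charted points within radius_m whose floor is shallower
        than planned_depth_m, i.e. terrain the boat could hit."""
        conflicts = []
        for x, y, floor_depth_m in bathymetry:  # depths measured positive-down
            dist = math.hypot(x - waypoint_xy[0], y - waypoint_xy[1])
            if dist <= radius_m and floor_depth_m < planned_depth_m:
                conflicts.append((x, y, floor_depth_m, dist))
        return conflicts

    # Planned depth 150 m: any charted floor shallower than that within 2 km
    # of the waypoint gets flagged; if the human checkers didn't already
    # catch it, count that as their miss.
    bathy = [(0, 500, 300.0), (800, 200, 120.0), (3000, 0, 90.0)]
    print(terrain_conflicts((0, 0), bathy, radius_m=2000, planned_depth_m=150.0))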
I assume the military already has a regime in place to handle "you dun goofed". You can string failsafes after the goof but before the wall.
toast0|4 years ago
If they half-ass, but follow the computer fixes, maybe nobody knows they were half-assing. If they half-ass and other people fix it, their half-assing is known and remediations are available.
zentiggr|4 years ago
At least, that's what it looked like when I saw it in '98. It doesn't sound like it's gotten any better.
post_from_work|4 years ago
It's 10x worse now: there are half a dozen different browser-based apps that all do the same thing, and that's BEFORE you mention the AI data fusion initiatives people want to integrate.
GekkePrutser|4 years ago
Well, the lack of situational awareness with EFIS (glass cockpit) systems is a known issue. Pilots tend to 'switch off' because of the low workload, and then when something goes wrong they're not aware of the situation because they haven't been following along.
This has contributed to some incidents, such as the AF 447 crash, where the pilots were basically unaware of the actual situation of the plane and flew it straight into the ground (well, sea).
I can understand the Navy wants its crews to be more involved for that reason (and perhaps also because the enemy might deliberately instill confusion by messing with navaids etc), but I think there should be at least a warning if you try to do something that's known to be stupid.
jwithington|4 years ago
You're probably on to something. When radar was first rolled out to all Navy ships to assist navigation (post WWII), accident rates actually increased.
My hunch is that Sailors drove ships more recklessly, thinking that radar would save them. A bit like the findings on seat-belt safety laws: no impact on fatalities.
But I'm not agitating for full-blown computer reviews. It just feels like the software should have the computing capabilities of Excel lol
Jtsummers|4 years ago
Discipline works until time pressure causes discipline to relax, and then the loosened discipline becomes the norm (normalization of deviance). There is no reliable way to restore discipline (in a timely fashion) after that happens, and then a collision becomes inevitable. If you only rely on discipline, you're bound to fail. If you have means of relieving the reliance on discipline and don't use them, you're making a tragic mistake.
OldHand2018|4 years ago