I've always found this story fascinating, because it's such a clear example of the role of operator error in safety. Both Daghlian and Slotin were experts in the work they were doing, and both were well aware of the danger of getting it wrong. Still, they made a mistake. As people do.
On one hand, the mistake was the effect of time and schedule pressure. Some of that pressure was real, but some was illusory (as shown by the fact that Los Alamos could stop doing those experiments entirely and still deliver). They chose the approach they did because it was the easiest. But not only that - at least in Slotin's case, he chose the approach he did because he didn't believe he would make a mistake. He'd done this a bunch of times. The danger had become routine, an idea captured in Diane Vaughan's books as "normalization of deviance", and in economic and sports research as the familiarity heuristic ("I know this, so I'm safe").
On the other hand, the experiment itself set them up to fail. Minor tweaks with near-zero cost impact, like bringing the tamper up from the bottom rather than down from the top, would likely have saved both men. My understanding is (and it's hard to get clear evidence of this) that both men designed their own experiments. Both were in a position to do the work in a safer way at no additional cost. With hindsight, both likely would have. In Slotin's case, the commitment heuristic ("I want to be consistent with my past actions") seems to have played a role in his doing something he knew to be dangerous "one last time". In Daghlian's case, the scarcity heuristic ("if I don't get this done tonight after the party, I might not get another chance") seems to have played a role.
This all goes to show how fallible we are. Not just these two, both extremely smart people; we all take irrational risks all the time. The big takeaway here is that systems need to be safe even though people are going to make bad decisions for bad reasons. One extremely effective way to do that is to separate design from implementation, using our rational decision-making processes to make the hard decisions ahead of the moment. Then write the decisions down. Then follow them in the moment.
A more mundane and perhaps more relatable version of this is the relaxation of key driving skills once the operator has become comfortable. It is not unusual to see someone performing maneuvers that are only "safe" under assumptions that cannot be strictly true but are statistically likely to be true (e.g. there is nobody coming the other way around this blind corner, there is no cross traffic this late at night, whatever). I doubt people consciously think of it as a dice roll every time they do it, but it is. Over time you get comfortable doing things simply because nothing bad has happened, but if the assumption fails, you will likely crash.
> a clear example of the role of operator error in safety
> The big takeaway here is that systems need to be safe even though people are going to make bad decisions for bad reasons.
This is why I don't like the phrase "operator error". All too often, it's used to excuse systemic failings and avoid investigating the deeper causes of safety incidents.
As you say, the fascinating part about these accidents is that the risk was informed.
There's a tendency to simply say, "If we had more procedures, and they were followed, then this wouldn't have happened." But that seems like a dodge. More often than not (in my experience), more procedure is simply seen as overly burdensome and actively subverted by users (in the service of laziness, performance, compensation, or deadlines). As you say, willfully making bad decisions for bad reasons.
Sometimes you have the benefit of almost absolute control over people, e.g. SUBSAFE [1] (or Navy or Air Force maintenance QA programs). But most organizations don't have that kind of training budget.
Consequently, the most effective procedure is the safest one that people will actually follow. So strike that balance.
[1] If you can get a hold of high-level documentation, an excellent example of error mitigation in practice -- https://en.wikipedia.org/wiki/SUBSAFE
Slotin was a cowboy (in the derogatory sense of the term) who thought bravado would keep him safe from intense radiation. The demon core accident wasn't an isolated incident; one of his less famous exploits involved swimming in an active nuclear reactor, which obviously irradiated him badly.
> "In the winter of 1945–1946, Slotin shocked some of his colleagues with a bold action. He repaired an instrument six feet under water inside the Clinton Pile while it was operating, rather than wait an extra day for the reactor to be shut down. He did not wear his dosimetry badge, but his dose was estimated to be at least 100 roentgen.[12] A dose of 1 Gy (~100 roentgen) can cause nausea and vomiting in 10% of cases, but is generally survivable.[13]"
Stories like these will forever remind me of Hisashi Ouchi, who died horribly from a different kind of criticality accident in Japan in 1999 [0]. He had received 17 sieverts, and they tried to keep him alive for as long as possible in what amounted to a human experiment. I won't post links to articles about this, as you can't unsee pictures like that; putting his name in a search engine will get you there.

[0] https://www.japantimes.co.jp/news/1999/12/22/national/jco-wo...
The incident that caused Slotin's death was discussed a bit in a recent documentary: The Half-Life of Genius Physicist Raemer Schreiber (2017) https://www.imdb.com/title/tt4870510/
This sounds a lot like the movie Fat Man and Little Boy. The John Cusack character was killed by radiation exposure from a mishap at Los Alamos. The movie is really interesting.
Love Alex's archival research work. I'd never seen a couple of those pictures or heard that particular account of the airglow from Schreiber before. I'm glad to see the fashion of painting Slotin as some kind of 'Canadian hero', as was popular 15-20 years ago, finally falling out of favor. He was a fool who regularly courted fate, doing things like diving to the bottom of the cooling pool of an operating reactor to repair an instrument rather than wait another day, giving himself a nice 100R dose.
Just because I choose to jump a motorcycle through a flaming hoop doesn't mean I think my body is impervious to flame.
It means I weighed the risks and made a choice.
In other words, ignorance, chance, and bravery are different things. And accidents are some mix of all of them.
It's available on Amazon Prime.
https://www.imdb.com/title/tt0097336/?ref_=ttfc_fc_tt
A sphere of metal a bit larger than a basketball is just the right size and shape to be manipulated by hand tools.
I wonder if that contributed to normalization of deviance. It doesn't look weird enough to kill you.
I'd like to hear a little more about the "bad luck" that caused a bottle of muscatel to get laced with antifreeze...
https://en.wikipedia.org/wiki/1985_diethylene_glycol_wine_sc...
https://youtu.be/VE8FnsnWz48