This is beautiful and humbling. An example of a scientist who loved being wrong was Fred Hoyle. Quote: ‘it is better to be interesting and wrong than boring and right’.
Talking of Russell, I can't imagine how he and Whitehead felt when the huge edifice of the Principia Mathematica simply crumbled away under the onslaught of the incompleteness theorem.
Yet it is hard to believe scientists would be the ones who only ship flawless software. It is easier to imagine that, most often, when one realises a previous publication's result was indeed affected by an error and faces the difficult choice between retracting and pretending not to have noticed, they opt for the latter. I've found scientists in general to have less ego than average, as expected from people trained to care only for the facts, but they still operate in the larger framework of an individualistic modern society.
It probably takes a scientist and a woman, whose education generally encourages a lesser ego to begin with, to admit such an error.
Or a software author, though in software the only alternative to admitting fault is total ridicule, since we are drowning every day in a world of errors.
> Yet it is hard to believe scientists would be the ones who only ship flawless software.
I mean, most of them don't even pin down their dependencies. I don't want to touch a lot of python code in the open written by scientists.
Software reproducibility seems to be the last thing on anyone's mind in the repositories submitted alongside papers on arXiv and other publishing venues.
Not even a requirements.txt or mentioning what they used.
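Even a minimal pin file goes a long way. A hypothetical `requirements.txt` sketch (the package names and versions here are illustrative only, not taken from any paper's repository):

```text
# requirements.txt -- pin exact versions so the analysis
# still runs against the same code years later
numpy==1.24.4
scipy==1.10.1
pandas==2.0.3
```

Producing it is one command, `pip freeze > requirements.txt`, and a reader can restore the same environment with `pip install -r requirements.txt`.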
I'm sure you don't really mean that premodern societies were more selfless and humble, but that's how your writing can be interpreted. Family name, paternalism and class-based society put even stronger incentives to skew the truth than individualism does.
The other side of this is that actual science happened. Admitting error is science being done right. The search for the truth is fundamental and detecting when it wasn't found is contributing to the greater truth.
Questions around methodology abound but at its core this is science walking tall. Not hobbling along loaded down with sugar-coated lies about to collapse into a coma. That this is seen as an exception or extraordinary is quite illuminating in itself.
I really hope those grad students that she mentioned didn't get a couple years added on to their degrees as a result of this. I mean good on her for finding the error, but I can't imagine what it would be like to be told "cancel your thesis, I made a mistake and your work is now invalid."
Yeah, I was disheartened that she didn’t mention what the fallout for them was. The one that was about to defend was probably the most affected out of anyone in this story. The author ended up keeping her grant and got tenure. What happened to the students?
Luckily, they were master's students, so the damage will be limited to a semester or so at most (in the US, a master's thesis generally isn't the most important part of the degree). If it were a PhD thesis... I don't want to think about that.
It's great that she did the right thing and had the paper retracted, but this is still terrible on so many levels.
Maybe with a strong effect for every single subject a little more skepticism would have been warranted in the first place? Some manual spot checking if possible, or using a minimal independent implementation of the analysis code?
Who knows if she'd gotten her grant, her assistant professorship without the publication of this incorrect finding. Who knows who didn't get any of that because they were a bit too careful in their work.
> Who knows if she'd gotten her grant, her assistant professorship without the publication of this incorrect finding. Who knows who didn't get any of that because they were a bit too careful in their work.
On the other hand, if she hadn't wasted her time on this useless study she might have done more useful studies and her career would be better than it is right now. She might have gotten even better grants if she hadn't made this error, and maybe fewer other people would have gotten grants. Maybe those other people being more careful helped them rather than hurt them.
I don't see how speculating like this is very useful.
> Maybe with a strong effect for every single subject a little more skepticism would have been warranted in the first place?
It is easy to think along these lines in hindsight[1], but it is much harder when there are no known mistakes. Obviously they had some plausible hypothesis, which predicted and explained the results. The stronger the result, the better for the hypothesis.
I mean, it was possible and maybe wise to suspect a bug because the results were too good, but that is hard from a cognitive standpoint. She describes the bug as hard to find even after she discovered that the results did not reproduce. It was even harder to find when there was less reason to believe a bug existed.
[1] https://en.wikipedia.org/wiki/Hindsight_bias
> It's great that she did the right thing and had the paper retracted
I'm sorry but you seem to have skipped reading the main part of the article. The paper was not retracted:
"The editor and publisher were understanding and ultimately opted not to retract the paper but to instead publish a revised version of the article, linked to from the original paper, with the results section updated to reflect the true (opposite) results."
I live in the city struck by the earthquake, and I can tell you that what the article states is not correct. The scientists stood trial not for a wrong prediction but for downplaying the possible risk after thousands of smaller earthquakes had been recorded in the area.
While they may be scientists by training, predicting earthquakes is an application and not science itself.
Like don't be wrong about that. Similar to how you should not be wrong about planes falling out of the sky, bridges collapsing, or medicines killing all the patients.
I wonder to what extent this has affected their public policy around healthcare? Were doctors afraid to warn about severity, for fear of getting it wrong after their advice had been used to close schools and businesses?
We still trust software too much. I suspect that most software-controlled experiments have errors like this! We should require that every experiment has at least two clean-room implementations of its logic, and a battery of smoke tests for common mistakes.
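As a sketch of what that could look like, here is a toy version of the idea, assuming a simple between-groups statistic (the function names and data are invented for illustration): two independently written implementations of the same quantity, plus smoke tests for classic mistakes like swapped condition labels.

```python
import math

def mean_diff_v1(group_a, group_b):
    """Primary implementation: difference of arithmetic means."""
    return sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

def running_mean(xs):
    """Incremental running mean, written independently of v1."""
    m = 0.0
    for i, x in enumerate(xs, start=1):
        m += (x - m) / i
    return m

def mean_diff_v2(group_a, group_b):
    """Clean-room implementation of the same statistic."""
    return running_mean(group_a) - running_mean(group_b)

def smoke_test():
    a = [1.0, 2.0, 3.0, 4.0]
    b = [2.0, 2.0, 2.0, 2.0]
    # The two independent implementations must agree.
    assert math.isclose(mean_diff_v1(a, b), mean_diff_v2(a, b))
    # Identical groups must give a zero effect.
    assert math.isclose(mean_diff_v1(a, a), 0.0)
    # Swapping the groups must flip the sign (catches label mix-ups).
    assert math.isclose(mean_diff_v1(a, b), -mean_diff_v1(b, a))

smoke_test()
```

The point is not the statistic itself but the habit: disagreement between the two implementations, or a failed sanity check, fails loudly before any result is reported.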
The most shocking part of this article was the revelation that the original article was not retracted. The entire conclusion and data were entirely wrong, yet the paper still stands. That’s really damning for the journal at least and likely speaks to a pretty bad culture in the wider field.
Kudos to the author for doing the right thing, but the fact that there seems to be no way to remove a paper that is blatantly false, because retractions are reserved for deliberate misconduct, is horrifying. Not only does this set up fucked-up long-term incentives (no downside to fraud if you paint it as a whoopsie), but it also harms all the work that cited that paper, and anyone doing a literature review without realizing that the ground other papers were standing on has dissolved away.
>The editor and publisher were understanding and ultimately opted not to retract the paper but to instead publish a revised version of the article, linked to from the original paper, with the results section updated to reflect the true (opposite) results.
https://osf.io/b94yx/
The paper authors made a mistake, fine. But the scientific process and peer review should have caught it. It didn't. The author caught it accidentally and then luckily decided to come forward (bravo!). This begs the question of robustness of the whole scientific publishing process. I hope they adopt the practice of doing a blameless RCA and improve the scientific and academic peer-review process.
>This begs the question of robustness of the whole scientific publishing process
It raises the question. "Begging the question" is an unrelated logical fallacy. Unfortunately, there have been a ton of examples of the peer-review process being essentially useless, like people deliberately inserting things to test whether the paper is even being read, and none of the reviewers noticing.
Well done by the author, and thanks for sharing, especially given the immense mental pressure. I think it is great that the journal was cooperative here. It would be interesting to see a single journal implement a very easy undo button for peer-reviewed research and see how that plays out over a series of years compared to the current model.
Although very rigid, the scientific ecosystem is quite robust, and we now see friction being removed with efficient pre-print servers.
I remember some early covid papers were withdrawn from bioRxiv at the authors' request.
What a nightmare. I bet this raises the hair on the back of the necks of a lot of other researchers. So much time and momentum can be invested into a research track like this.
Is this a nightmare? I read a story of someone who had a very reasonable, strong emotional response to the error, but ultimately got credit for coming clean and republished their results with new data. (And a different conclusion.)
This is exactly how I'd expect something like this to work -- the author isn't a bad person because they made an error. The co-authors aren't bad people because they failed to catch it. Software and science are hard, mistakes are going to happen.
If anything, I think the researchers learned valuable lessons, and are better researchers as a result. They have an anecdote they can share with more junior researchers about this frightening thing that happened to them, and use that to grow more people.
We should celebrate people who take the time to handle their mistakes properly and share the lessons openly.
That gives you a web page where you can give the full URL to the paywalled one, and it'll give back an "https://archive.xx/address" URL where you can see the full thing.
eg: https://archive.vn/0ikco in this instance
I use DuckDuckGo's iOS browser. If you use its Clear Data functionality, most sites will display the article (because they think you're a first-time visitor).
Now if only politicians learned a thing or two from this guy.
Not possible however: scientists talk to brains, politicians talk to bellies and appeal to instincts, which require them to appear strong by showing self confidence and never admit any errors.
I'm surprised if updating papers to correct mistakes is not the common path. Especially with software being a crucial part of it. I mean, how many software products do we know of that shipped flawlessly in v1.0?
So this error was in the code that actually ran the experiment, not the analysis. The experiment was effectively doomed from the beginning. The mistake amounts to failing to calibrate and test your experimental apparatus, which is a little hard to forgive, since it would have taken hardly any time or effort. Perhaps I’m being harsh but scientific enquiry is too important to be satisfied with conducting unrigorous experiments then apologising, which of course one could do ad infinitum.
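A "calibration" of experiment code can be as simple as a dry run that checks each condition against the design before any participant is tested. A minimal sketch of the pattern (the condition labels, parameters, and function names below are all invented for illustration, not from the article):

```python
# Hypothetical design table: what each condition label is
# supposed to present, per the experimental design.
DESIGN = {
    "congruent":   {"noise_db": 40, "cue": True},
    "incongruent": {"noise_db": 70, "cue": False},
}

def present_trial(condition):
    """Stand-in for the experiment code's trial routine; returns
    the stimulus parameters it would actually use."""
    return DESIGN[condition]

def calibrate():
    """Dry-run every condition once before data collection and
    check the apparatus against the design."""
    for label, expected in DESIGN.items():
        actual = present_trial(label)
        assert actual == expected, f"condition {label!r} misconfigured: {actual}"
    # Cross-condition sanity: the hard condition really is noisier.
    assert (present_trial("incongruent")["noise_db"]
            > present_trial("congruent")["noise_db"])

calibrate()
```

In a real experiment `present_trial` would drive the actual stimulus code rather than read the table back, which is exactly what makes the check worth running: it catches a swapped or mislabelled condition before a single subject is run.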
sn41 | 6 years ago:
https://sites.tufts.edu/histmath/files/2015/11/Frege-Letter-...
Russell himself said: "As I think about acts of integrity and grace, there is nothing to compare with Frege's dedication to truth."
denzil_correa | 6 years ago:
The job of a scientist is really not to ship software; that's what a team of engineers would do.
tehlike | 6 years ago:
There is no shame in making mistakes, and being honest about it should take us further as a society.
peterlk | 6 years ago:
I know this isn't really what the article is about, but scientists are allowed to be wrong unless and until politics is involved.
[0] https://www.scientificamerican.com/article/italian-scientist...
dwighttk | 6 years ago:
I don't see what the problem is.
seemslegit | 6 years ago:
Ideally, that would have been an uh-oh moment.