I saw this headline and my first thought was that someone was claiming that a mind-impacting virus that evolved in the ocean was causing scientists to do less ambitious research. Which is of course ridiculous, lol. But a bug in a visualization library impacting science is also ridiculous.
If I understand the abstract correctly (which I very well might not), I don't think it is saying a bug caused problems across all of science, but that it resulted in an incorrect conclusion in one meta-study of disruptiveness in science.
Given that the majority of scientists seem to be cat owners, and toxoplasmosis has been linked to mental illness, it's not entirely implausible that a (human) bug is slowing scientific advancement.
Science communication must be at an all-time low. I initially thought the paper was about a sea-borne pathogen being responsible for a decline in disruptiveness in science, which is a crazy statement.
Then I thought that it was a paper claiming that a bug in the seaborn plotting library in Python was responsible for the decline in disruptiveness in science, which is absurd!
Finally I understood that this is a paper debunking another meta-paper that claimed disruptiveness in science had declined. And this new arXiv paper shows that a bug in the seaborn plotting library is responsible for the mistake in the analysis that led to that widely publicized conclusion about declining disruptiveness in science. Oh boy, so many levels...
Neither the paper title nor the abstract leads with "Seaborn." The decision to start the submission with "Seaborn bug…" is purely an HN artifact, and has nothing to do with science communication.
ETA: For those who don’t click through, the paper title is “Dataset Artefacts are the Hidden Drivers of the Declining Disruptiveness in Science.” The first few sentences of the abstract are:
“Park et al. [1] reported a decline in the disruptiveness of scientific and technological knowledge over time. Their main finding is based on the computation of CD indices, a measure of disruption in citation networks [2], across almost 45 million papers and 3.9 million patents. Due to a factual plotting mistake, database entries with zero references were omitted in the CD index distributions, hiding a large number of outliers with a maximum CD index of one, while keeping them in the analysis [1].”
The seaborn issue linked in the paper, “Treat binwidth as approximate to avoid dropping outermost datapoints” (https://github.com/mwaskom/seaborn/pull/3489), summarizes the problem as follows:
> floating point errors could cause the largest datapoint(s) to be silently dropped
However, the paper does not contain the string “float”, instead saying only:
> A bug in the seaborn 0.11.2 plotting software [3], used by Park et al. [1], silently drops the largest data points in the histograms.
So at the very least, the paper is silent on a key aspect of the bug.
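For the curious, the class of bug that PR describes is easy to reproduce: if histogram bin edges are built by repeatedly adding a floating-point binwidth, the top edge can land just below the true maximum, and any point filtered against that edge is silently dropped. This is a minimal sketch of that failure mode, not seaborn's actual implementation:

```python
# Data whose maximum is exactly 1.0, like a CD index capped at 1.
data = [0.05, 0.25, 0.6, 1.0]
binwidth = 0.1

# Naive edge construction: accumulate the binwidth from the minimum.
edges = [0.0]
for _ in range(10):  # ten bins of width 0.1 should span [0.0, 1.0]
    edges.append(edges[-1] + binwidth)

print(edges[-1])  # 0.9999999999999999, just short of 1.0

# Keeping only points that fall inside the computed edges
# silently drops the maximum:
kept = [x for x in data if edges[0] <= x <= edges[-1]]
print(kept)  # [0.05, 0.25, 0.6] -- the 1.0 outlier is gone
```

The fix in the linked PR is essentially to treat the binwidth as approximate, so the outermost edge is nudged to include the extreme datapoints rather than compared exactly.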
Seaborn is a visualization library; no statistical tests should have been done with seaborn as an intermediate processing step. I guess they used some of its convenience functions as part of the data analysis. Seaborn is a final-step tool, not a data-analysis tool. That's an embarrassing lesson to learn post-publication.
Take a look at the linked chart in my other comment. Visualization is absolutely a driver during research, not just a final step, and using it mid-analysis isn't an embarrassing revelation. Charts killed the Challenger crew.
The submission was flagged, and I'm not sure I understand why, since the only (negatively) critical discussion I see concerns the ambiguity of the title in the HN submission. Flagging a submission appears to take it off the HN homepage, and a title ambiguity, weighed against the significance of the submission itself, doesn't seem like a strong reason for removing it from HN? :)
There are (at the time of posting this comment) no comments raising any substantive issue with the arXiv submission itself (which of course has to go through the peer-review process of publication, and hopefully the original authors will respond to / rebut this new article) - so I'm curious why it's been flagged. It's not dead, so I cannot vouch for it.
If folks in the HN community who have flagged it have done so because there are serious issues with what the paper is asserting, please comment / critique instead of just flagging it. If it's because of the ambiguity in the title, I hope @dang and the moderators editorialize - there are some valuable comments in this thread that helped me understand what the issue is and what the bug is!
Gonna preface by saying I like what matplotlib is trying to do, and that it has done a lot of good for a lot of people.
Seaborn is a wrapper around matplotlib. It's popular because it removes a lot of the boilerplate from matplotlib and is pandas-aware.
For example, you call the pairplot function with a dataframe and you just get a matrix of correlation plots and histograms. Compare matplotlib, where half the documentation and search results use the imperative, global-state interface, the other half use the OOP one, and there are all the subplot shenanigans to decipher before anything looks good.
It's convenience, really. People who use seaborn don't want to dive into matplotlib, because the interface is kind of a mess with multiple incompatible ways to do things. Seaborn also documents what its arguments mean instead of hiding most of them in **kwargs soup. You get plots in 1 minute with seaborn that would take 10 minutes to write in matplotlib.
Bizarre. How do people make such big, splashy findings that can mess with people's sense of optimism about science and innovation, without doing the simplest checks on their data and methodology?
No, the question is: how did peer review not catch it? I have the impression that reviewers don't have the time or incentive to give papers more than a cursory review. Independent of this case, a great many papers get published where the only "proof" is a user study or survey with an extremely low number of participants. Many papers don't publish their datasets and don't contain enough detail to attempt replicating their results.
There should be a real incentive/compensation for reviewing properly and real consequences if a paper gets retracted for reasons that should have been caught in review.
In this case it's fortunate that it did get found out in the end.
Trash comment.
1st. "Splashy" often comes from the media, not the scientists.
2nd. One of the ways we discover problems with data is by plotting. When the plotting library has a bug that hides a problem, well, shit.
3rd. They did check their own findings multiple ways. Mistakes happen. The biggest critics of scientific mistakes are often those who have never done science themselves. It's easy, and it's a cheap play.
What were you expecting? I read that as "a bug in the Seaborn graphing library caused wrong conclusions" and don't understand what other interpretations there are.
Of course, it has nothing to do with rampant fraud, unreproducible results, incentive structures that reward the number of papers over their quality, having researchers spend their prime scientific years writing grant proposals instead of doing actual research...
...nor does it have anything to do with tech companies hoarding cash by the trillions of dollars overseas instead of spending it on R&D. And what R&D they do produce internally, they have no incentive to publish or productize, because virtually no new business will be more profitable than the monopoly business they already have...
tempodox|2 years ago
https://www.imdb.com/title/tt4877736
mglz|2 years ago
It's arxiv, not a press release. :)
Aloisius|2 years ago
The bug in Seaborn simply meant that the histograms that could have alerted them that something was wrong with their analysis didn't.
light_hue_1|2 years ago
And I hope the original authors tell Nature to retract their paper. It's already highly influential unfortunately.
sitkack|2 years ago
I'm on mobile and can't read the rest of the paper, but the impact could be massive.
masklinn|2 years ago
I have definitely done that with benchmarks / profiles.
It’s probably even easier when the incentives encourage “the find”.
sergers|2 years ago
Like others, I was expecting a wildly different article...
asplake|2 years ago
Edit: Not mentioned in the abstract but it is in the main paper. Editorialised title.