I've been in academic research for 22 years. Based on that experience, I'll go out on a limb and say that even single-blind reviews should be discarded.
This is leading to quite irresponsible reviews. Instead, authored and credited reviews might lead to more responsible reviews, or reviewers respectfully declining when they might not know the topic.
Instead, in CS, there is a tendency to hide behind an abrasive negative review when the reality is that the reviewer does not understand the paper. Programme Committees are relieved to find a negative review, however unfair or off-kilter it is, because more rejected papers will decrease the acceptance ratio of the conference, hence make it appear more competitive.
Double-blind reviews are just peer-review theater. It is quite simple to guess which group the paper is from. It is difficult to guess the exact set of authors, but reviewers who are out to settle a score or to discard dismissively just need to know a ballpark of where the paper is from in order to stonewall with an irascible review.
CS publication culture is a large part of the problem. The average CS paper is small (in terms of person-years put into the project), so people submit many papers. Most papers are submitted to conferences first, which usually accept or reject after a single round of reviews. When the typical outcome is rejection and resubmission to another conference, the negative aspects of peer review become prominent.
Over the years, I've drifted to the biological side of bioinformatics methods. The papers I write are similar to CS papers, but the projects are bigger, and I no longer submit papers to CS conferences. My typical experience with peer review is accept after 1-3 rounds of revisions. Hence I get to see more of the positive side, where the reviewers act as editors and their suggestions improve the paper.
Of course, it's possible to get a CS-like experience with journals by being ambitious. If you try your luck by submitting to prestigious journals the paper is unlikely to get into, the typical outcome is a desk rejection or a rejection after a single round of reviews. Then you get to see more of the negative side of peer review. But I'm just not interested in playing that game.
Agree. I used to think double blind was an obvious requirement for a good peer review system. Your “theatre” take is aptly harsh I’d say.
Also hard agree on publishing reviews. It seems to be the only viable modification of peer review that we have.
My latest experience with peer review was just bad. No matter the conclusion they drew, there was just a clear lack of quality and understanding in their thoughts and effort. And this is not infrequent.
More generally, such a modification has knock-on effects that probably warrant some thought about what we do with our publication practices. Published reviews will probably require more work from reviewers who are already time-poor. Better reviews, hopefully, but fewer of them, and fewer papers?
Counting a good review as a citable publication might be worth considering, as essentially mini idiosyncratic literature reviews. Perhaps in combination with Registered Reports[1], where the proposal is published and reviewed before the study/work is done, in which case the reviewer is closer to symbiotic co-publisher.
Either way, it seems that for both authors and reviewers, who are effectively the same people and who both rely on peer review as a guarantee of the value of their work, the actual work of peer review needs to be taken more seriously and not conceived of as some sort of feudal aristocratic gentlemen’s duty.
[1] https://www.cos.io/initiatives/registered-reports
> Double-blind reviews are just peer-review theater. It is quite simple to guess which group the paper is from. It is difficult to guess the exact set of authors, but reviewers who are out to settle a score or to discard dismissively just need to know a ballpark of where the paper is from in order to stonewall with an irascible review.
In some fields like AI knowing the exact authors is also quite common. Because the vast majority of researchers are employed by just a handful of big labs, the reviewers (and sometimes organizers) with experience in the subfield are employed by exactly the same lab(s). So they already know each other and each other's research anyway. Add to that that citations and prior work usually give away the set of authors too.
> This is leading to quite irresponsible reviews. Instead, authored and credited reviews might lead to more responsible reviews, or reviewers respectfully declining when they might not know the topic.
That misses the whole point of anonymous reviews, or even anonymous voting.
You want your work to stand by itself. Otherwise you're blasting open a very corrupt door where appeals to authority, petty politics, careerism, and funding play a role in how a paper is approved.
> Instead, in CS, there is a tendency to hide behind an abrasive negative review when the reality is that the reviewer does not understand the paper.
Isn't the whole point of a paper to present a topic in a clear and understandable way, so that your peers are able to cut through the bullshit?
If you pick a journal to publish your paper, which means you explicitly want the paper's editors to go through each and every single line of text you wrote to poke holes, but in the end once those holes are picked you complain that the journal you picked is not the right one for your flawless paper and that they all suck and their problem is that they don't understand your genius, what does this say about you and your work?
When you pick a journal you pick the subset of your peers to review your work. If your peers point out problems then why not listen to them?
Your comment reads a whole lot like "the fox and the grapes".
https://en.wikipedia.org/wiki/The_Fox_and_the_Grapes
> Instead, in CS, there is a tendency to hide behind an abrasive negative review when the reality is that the reviewer does not understand the paper.
Never heard this argument before, but having been through many double-blind peer reviews in the last 10 years, I can second this.
Interestingly, in my last paper for PLOS ONE, one of the two reviewers chose to lift anonymity, and his comments specifically were focused, supportive, and substantial.
The one experience I had with peer review before leaving academia was actually pretty nice. One of the reviewers was a bit of a hardass, but they made good points and I think made the paper much better.
As a consumer of papers now, I find peer review pretty useful. I can’t read everything that comes out. Even if I just read abstracts, I need some sort of filter. Publication in a top tier journal raises the probability that egregious errors haven’t been made and that the paper is worthy of my time.
Of course, in practice, this has the problem that it is already hard to find reviewers, because nobody has time anymore. Removing the anonymity of reviewers means the judge becomes open to judgement themselves, and many would not like that. Furthermore, there might be severe repercussions if the reviewee is more powerful than the reviewer, especially in authoritarian societies. But maybe one should just exclude these societies from peer review, and let them do their own thing.
I think peer review has to become a market place. Let everyone choose for themselves which paper they would like to review, and there is both positive and negative credit for both reviewers and authors. The pool of reviewers of a paper shares 25% of the credit that the authors of the paper get. This way you are incentivised to review even outlier papers, because if they are successful you get a substantial amount of credit for them. How the credit is distributed among the reviewers should also incorporate a time factor, so that reviewers rushing to a paper that is already successful don't get nearly as much credit as a lone early reviewer.
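As a minimal sketch of how that credit split might be computed (the 25% reviewer pool comes from the proposal above; the exponential time decay and its rate are my own illustrative assumptions):

```python
import math

def reviewer_credit(author_credit, review_days, decay=0.1, reviewer_share=0.25):
    """Split a fixed share of the authors' credit among the reviewers,
    weighting earlier reviewers more heavily via exponential time decay.

    review_days: days after publication at which each review was claimed.
    """
    pool = reviewer_share * author_credit
    weights = [math.exp(-decay * day) for day in review_days]
    total = sum(weights)
    return [pool * w / total for w in weights]

# The lone early reviewer (day 1) takes almost the whole pool; reviewers
# who pile on after the paper is already a hit (day 60, 90) get little.
shares = reviewer_credit(100.0, [1, 60, 90])
```

Any monotone decay would do; the point is only that rushing to review an already-successful paper earns far less than an early, risky review.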
I have no serious involvement in academia. What is the proportion of work that gets reviewed by someone with an axe to grind? I don’t doubt that it happens regularly but I have no conception of the magnitude of the problem.
> Instead, in CS, there is a tendency to hide behind an abrasive negative review when the reality is that the reviewer does not understand the paper.
This problem goes even deeper, because it applies not only to papers but also to research grant applications. I once even got a (positive) recommendation for one of my applications, but I could see from the statement that the reviewer did not really understand what was important.
All of this process creates incredible gatekeeping effects. Science would be much more robust if published GitHub-style: post your results, engage with the community, fix issues, respond to critics, and eventually wrap it up, always with the priority date of your original work recorded so that there's no dispute as to who invented it first.
Double-blind peer review sounds nice in theory, but I don't think it would hold up in practice, especially in specialized domains.
In those domains, everyone knows everyone else. Even if you hid the name, the writing style, the subject, the choice of analysis software, the type of study, etc. would largely give away who the authors are. For example, I bet you could figure out that Richard Stallman wrote a particular paper, even if I removed his name from it. Or on HN, how many times have you read a comment and gone, "that sounds like..." even without fully noticing who posted it?
The field(s) I'm in use double-blind review, and it's broken for a host of other reasons. It's really hard to build on a string of your own work, for example. Students (e.g. the ones writing the papers) are often bad at the anonymization requirements. On the other side, reviewers can often form an opinion on the likely authors (whether correct or not). And one level up (at the editorial / program committee level) the anonymization goes away anyway, since they need to check for reviewer conflicts, etc.
I think ML, at least when I was involved in it, had a much more sane way of approaching publication than other fields (I came from a biology background).
Open science addresses a lot of the pain points mentioned in the article. I think we can all agree that having certain journals that collect what can be considered the best work is a good thing; people, for better or worse, need these instruments to quickly judge the baseline quality of an article. Regardless of my personal criticisms of a particular journal, or its potentially declining quality, it is obviously very helpful to be able to know "hey, at least I can expect some degree of quality given that it was published in this journal or that one", especially as a student.
But this does not mean we ought to gatekeep research. Not only that: having your publication out in the open can generate a lot of feedback that can be incorporated and addressed. And this is where I go back to my original statement: in ML it was very common to publish your work to arXiv for people to read through, and I think that greatly improved the speed at which the field developed. Access is very important.
So I'm all for abolishing "pre-publication peer review".
This vastly underestimates the esoteric nature, experiential knowledge, and complexity required to review most research published in the scientific community. The general public cannot be expected to provide capable, non-populist review of pharmaceutical-level research.
As someone who is inside the academic publication loop, I am in favor of double-blind, non-public peer reviews because I don't want to invite 4Chan into my daily work.
I believe that double-blind, non-public reviews do a great job at protecting those in a more precarious situation: grad students are not judged by their lack of publication record, there is no permanent public record of all those times they got rejected, and reviewers can be both more honest and more certain that a rejection won't lead to a 4Chan/Twitter mob coming for them.
According to the ACL's 2019 survey [1], "female respondents were less likely to support public review than male respondents". I'd be wary of implementing any change that would make academia even more hostile to women and minorities.
[1] http://acl2019pcblog.fileli.unipi.it/wp-content/uploads/2019...
I just don't see how double-blind peer review can work. To properly peer review a manuscript you often have to read the papers cited (because often details of the methods of the current manuscript are described in previously published work). These citations are almost always to the authors' own work. So it is trivial for any peer reviewer to figure out who wrote the manuscript they are reviewing, even if their names are blinded.
> I am in favor of double-blind, non-public peer reviews because I don't want to invite 4Chan into my daily work.
At least for mathematics papers, the community that works in a particular area, and is thus actually able to review the paper, is pretty small. Since, on the other hand, the style you use for mathematical proofs is like a fingerprint, double-blind peer review is next to impossible.
> I have a partial solution: researchers “publish” papers to arXiv or similar, then “submit” them to the journal, which conducts peer review. The “journal” is a list of links to papers that it has accepted or verified.
This made me curious about how arxiv operates. It seems that you require endorsement to become a registered author, and the submissions are moderated[1][2].
This already seems sufficient to keep out spam and clearly junk science. What is the value add of official(?) journals?
[1] https://arxiv.org/about
[2] https://arxiv.org/help/submit
"Official" (aka well-known/have a high "Impact Factor") journals give you a stamp of approval that help you get recognised in your career, and in some fields are where your peers go to stay up to date on new potentially relevant research.
(Disclosure: I volunteer for https://plaudit.pub, a non-profit that aims to separate that from the publication process.)
Submissions are moderated, but depending on the field, anything reasonable-looking may be accepted. For example, there is more than one "proof" of the Riemann Hypothesis in the arXiv. But the moderation system certainly keeps out spam.
Edit: Here's a proof of the Riemann hypothesis submitted just last week: https://arxiv.org/pdf/2209.01890.pdf You can have a little fun exploring "proofs" of famous theorems on the arXiv over the years.
There's lots of science that can't be reproduced before publishing or where reproducing doesn't help:
* Theoretical work
* Computer simulations (re-run the simulation? That won't detect most issues. Re-create the program from scratch? Expensive, and it's hard to make a rewrite meaningfully independent of the original)
* Cosmological observations (can't re-run that supernova!)
* Medical case studies (can't just find a new patient)
* Large-scale studies (build a second LHC? Do results based on the Framingham Study have to kick off another 30 years of data collection before publishing?)
In fact, it's only for small, self-contained experiments that this is really practical.
Reproducing results before publication isn't the standard in peer review; it is unusual. https://xkcd.com/882/ applies to science, and will get "green jelly beans cause cancer" published with no need to mention all the experiments that got the expected results.
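A rough sketch of the xkcd 882 effect: run a batch of "experiments" on pure noise and count how many clear p < 0.05 by luck alone. The coin-flip setup and the normal approximation here are illustrative assumptions of mine, not anything from the comment above:

```python
import random

def jelly_bean_study(n_colors=20, n_subjects=100, seed=0):
    """Run n_colors null 'experiments' (fair coin flips, no real effect)
    and count how many look 'significant' at p < 0.05 purely by chance."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_colors):
        heads = sum(rng.random() < 0.5 for _ in range(n_subjects))
        # z-score of the observed count against the null hypothesis p = 0.5
        z = abs(heads - 0.5 * n_subjects) / (0.5 * n_subjects ** 0.5)
        if z > 1.96:  # two-sided p < 0.05 under the normal approximation
            false_positives += 1
    return false_positives
```

With 20 null tests at that threshold you expect about one spurious hit per batch, which is all the jelly-bean comic needs.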
Which is why reproducibility should be the most important topic.
Having been on both sides of the peer review process in my field (Theoretical Physics), I say we should completely abolish it. Just publish the papers on the preprint archives. Some will be bad, some will be good, and you will know the impact of a paper after 10 years, nothing different from how it is now. Currently the very big majority of the effort in the reviews (on both sides of the process) is wasted time.
While knowing who the authors are can cause bias, hiding behind anonymization (as a reviewer) can as well. I think publishing the reviews (and who reviews) would help a lot (some fields do this). Science is really a dialog. The authors are putting forth part of the dialog and the reviewers are reflecting on it and giving their opinions. Hopefully publishing names next to reviews would minimize crappy reviews (often by the most senior folks). However the other problem is there are not enough qualified reviewers to go around and I could see many opting out of putting their name next to a review. But, we're likely better off.
There also often isn't a clear-cut line between "good" and "bad". Almost all papers have flaws, and putting those out in the open for others to improve upon, or at least acknowledge, would help move humanity's knowledge forward.
having worked in academic publishing, i've concluded that peer review would perform best if it were embedded in the very act of reading (so post-publication, non-formalized kind of peer review) instead of having it be a formalized process riddled with issues and cracks as it is currently.
it would enable faster and more seamless communication, academics wouldn't be burdened with extra volunteer work, and publishers would still be able to curate works as the authors suggest (+ there are other ways to do it).
sadly my impression is that it's simply too entrenched in the publishing process -- and in publishers' raison d'être to a large extent -- for the publishers to relinquish control of it
I like the basic principle of peer review, but not how it's done. I understand why it was done that way for so long, but we have the technology to do better now.
I read an interview with a TV show runner who said you can come up with any crazy plot twist thing, and within 10 minutes of the first episode airing some guy on the internet has already figured the whole thing out. I think this phenomenon could be put to good use.
Just publish your papers publicly with a comments section. If there are problems with it, people will tear it apart and source their objections, and you let the world help you improve your work.
The way the Journal of Open Source Software (JOSS) - https://joss.theoj.org - handles the reviewing process is good. Everything is out in the open: reviewers' and authors' names, reviews, responses, discussions, etc. The whole process is there for everyone to see.
The open discussions not only benefit the peer review process but also act as context and learning material for others.
Is this:
* an unsolvable problem inherent to all peer reviews?
* a problem that comes from specific practices, which could be solved by doing peer reviews differently?
I'm just a layman - but if reviews are not done by peers (= experts working in the same field)... who else could properly review a paper?
You will be amazed what you didn't spot/assumed and learn a lot.