IKantRead|2 years ago
There's an article I came across a while back, which I can't easily find now, that mapped out the history of our current peer review system. Peer review as we know it today was largely born in the 1970s as a response to several funding crises in academia; it was a strategy to make research appear more credible.
The most damning critique of peer review, of course, is that it completely failed to stop (and arguably aided) the reproducibility crisis. We have an academic system whose prime motivation is to secure funding through the image of credibility, which from first principles is a recipe for widespread fraud.
hnfong|2 years ago
Academic careers are then decided by GitHub activity charts.
MichaelZuo|2 years ago
But is there an alternative that still allows most academic aspirants to participate?
ribosometronome|2 years ago
It's worth pointing out that most of everything happened before peer review was dominant. Given how many advances we've made in the past 50 years, I'm not sure everyone would agree with your statement. If they did, they'd probably also have to agree that most of the worst science happened before peer review was dominant, too.
ikesau|2 years ago
https://www.experimental-history.com/p/the-rise-and-fall-of-...
https://www.experimental-history.com/p/the-dance-of-the-nake...
smcin|2 years ago
- accessing and verifying the datasets (via some tamper-proof mechanism with an audit trail). Ditto the code. This would have detected the alleged Francesca Gino and Dan Ariely frauds, and many others. It's much easier in domains like behavioral psychology, where datasets are spreadsheets well under 1 MB rather than GB or TB. (A sketch of such an audit trail follows this list.)
- picking a selective sample of papers to check reproducibility on; you can't verify all submissions, but you could certainly verify most accepted papers, the top 1,000 most-cited new papers each year in each field, etc. This would prevent the worst excesses.
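As a minimal sketch of what such tamper-evident registration could look like (an illustration added here, not part of the original comment; the register function, the audit_log.jsonl file, and the log format are all hypothetical):

    # Register a dataset's SHA-256 in an append-only, hash-chained log so
    # later edits to the data (or to past log entries) become detectable.
    import hashlib, json, time, pathlib

    LOG = pathlib.Path("audit_log.jsonl")  # hypothetical log location

    def register(dataset_path: str) -> dict:
        digest = hashlib.sha256(pathlib.Path(dataset_path).read_bytes()).hexdigest()
        lines = LOG.read_text().splitlines() if LOG.exists() else []
        # Chain each entry to its predecessor; the first entry gets a zero hash.
        prev = json.loads(lines[-1])["entry_hash"] if lines else "0" * 64
        entry = {"dataset": dataset_path, "sha256": digest,
                 "time": time.time(), "prev": prev}
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        with LOG.open("a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

    # A reviewer later re-hashes the copy of the dataset they were given and
    # compares it against the registered sha256; a doctored spreadsheet will
    # no longer match, and rewriting past log entries breaks the hash chain.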
PS: a superb overview video [0] by Pete Judo, "6 Ways Scientists Fake Their Data", covers p-hacking, data peeking, variable manipulation, hypothesis-shopping and selective sampling, selective reporting, and questionable outlier treatment (a small simulation of this follows the references). It's based on the article at [1]. As Judo frequently remarks, there should be much more formal incentive for publishing replication studies and negative results.
[0]: https://www.youtube.com/watch?v=6uqDhQxhmDg
[1]: "Statisics by Jim: What is P Hacking: Methods & Best Practices" https://statisticsbyjim.com/hypothesis-testing/p-hacking/
pas|2 years ago
Mostly it should try to do it through falsifying things; of course, groupthink is seldom effective at that.
ska|2 years ago
This seems unlikely to be true, simply given the growth. If you're arguing that the signal-to-noise ratio was better, that's different.
cs702|2 years ago
https://www.experimental-history.com/p/the-rise-and-fall-of-...