top | item 39334085

nomonnai | 2 years ago

This is an alarming development. Nevertheless, as a scientist, I believe it's an effect of a misguided culture of trust. This culture has, paradoxically, allowed mistrust of the sort described in the article to fester.

> So much of science is built on trust and faith in the ethics and integrity of our colleagues.

This is where things went wrong. We need a culture of "show me," not "trust me." That is the core of critical rationalism: establishing the convention that checking one another's work is the only way to advance our understanding of the world.

Figuring new things out is an error-prone process. Sometimes, these errors are not known to a researcher; sometimes, they are known but deemed non-critical; sometimes, a person has ulterior goals that would be endangered by acknowledging and correcting the error. I don't judge. We've all been there.

However, things have been swept under the rug for far too long. If large-scale replication attempts can reproduce only 50% of the studies investigated, published results from psychology cannot and should not be trusted without further checks (https://journals.sagepub.com/doi/10.1177/2515245918810225). The problem may be less acute in other fields, but perhaps only because of a lack of scrutiny (https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...).

Absent an established process for verifying a publication's central claims, people fall back on substitute measures such as "sounds like ChatGPT to me." That is an effect, not a cause, of a culture that values "trust me" over "show me."
