itkovian_ | 3 months ago

Whether it’s actually 20% or not doesn’t matter; everyone is aware that the signal of the top confs is in freefall.

There are also rings of reviewer fraud going on, where groups of people in these niche areas all get assigned each other's papers and recommend acceptance; in many cases the AC is part of this as well. I'm not saying this is common, but it is occurring.

It feels as if every layer of society is in maximum extraction mode and this is just a single example. No one is spending time to carefully and deeply review a paper because they care and they feel on principle that's the right thing to do. People used to do this.

itkovian_ | 3 months ago

The argument is that there is no incentive to carefully review a paper (I agree); however, people used to do the right thing without explicit incentives. That has totally disappeared.

bee_rider | 3 months ago

The concept of the professional has been basically obliterated in our society. Instead we have people doing engineering, science, and doctoring as, just, jobs. Individual contributors of various flavors to be shuffled around by middle management.

Without professions, there are no more professional communities really, no more professional standards to uphold, no reason to get in the way of somebody’s publications.

isoprophlex | 3 months ago

If the Zucc has a weird day he starts dropping 10-100M salary packages in order to poach AI researchers. No wonder the game is getting rigged up the butthole.

apf6 | 3 months ago

To some degree this is a "market correction" on the inherent value of these papers. There are way too many low-value papers being published purely for career advancement and CV padding. It's hard to get peer reviewers to care about those.

araes | 3 months ago

> spending time to carefully and deeply review a paper because they care and they feel on principle that's the right thing to do

Generally agree, although there are several parts to that issue.

One of the first was covered by a paper back in 2023 that speaks to the maximum extraction point. [1] Fairness, honesty, and loyalty are usually rewarded with exploitation. If you spend time to carefully and deeply review a paper, that ironically marks you as someone who can be exploited. You're implicitly marked as someone who will make personal sacrifices for the academic community, and even more awful behavior gets piled on top of you. Meanwhile, unless they're caught doing something especially egregious, the people who don't review carefully get promoted, spend even less time on reviews, and reap further rewards.

[1] https://www.sciencedirect.com/science/article/abs/pii/S00221...

The academic community has talked about this for years: editors and reviewers who aren't paid, or get minimal payment, sacrifice large amounts of their personal time effectively volunteering, while authors pay thousands of dollars for each paper submitted and journals charge tens of thousands for each subscription. It's been discussed for decades, and yet in all that time very little has actually changed.

Another part, on top of the "deeply reviewing papers" issue, is that sheer volume has massively increased (a problem in a bunch of industries; the sci-fi magazine Clarkesworld had to close submissions for quite a while in 2023 for similar reasons [2]). In the land of "type a sentence, get a free academic paper," the extremely prolific are pouring out a paper a month, sometimes more. In areas like clinical medicine, hyper-prolific publishing has hit rates of 70+ papers a year [3], roughly 1.5 papers a week. Every few days somebody cranks out yet another paper that needs to be reviewed. In the article linked, one author submitted 140 articles to a single journal alone: almost 3 times a week, all year long, a paper claiming research worthy of publication that somebody needs to review.

[2] https://neil-clarke.com/how-ai-submissions-have-changed-our-...

[3] https://www.sciencedirect.com/science/article/pii/S175115772...

One that I have less direct, citable proof for, yet am rather suspicious of, is that theft has also dramatically increased alongside the surge in invasive monitoring and snooping. If my TV changes what I'm watching, and what's recommended, because I typed a text message to somebody, it seems likely that a lot of academia is also dealing with massive intellectual theft issues. That heavily prioritizes pouring out material as quickly as possible, with as little effort as possible, to get the equivalent of first post and maximal posts before the work can be scraped, exfiltrated, and published by somebody else.

Finally, a lot of the reward and incentive has become metric chasing. Publish or Perish [4] and the Replication Crisis [5] are relatively well known ideas. Citations are a proxy for a paper's impact, tenure and advancement are heavily tied to the quantity of publications and citations, and researchers would prefer to be cited more. And weirdly, in keeping with the theme above, if the work is junk and doesn't hold up, it has been suggested that nonreplicable publications are cited more than replicable ones [6]. In the linked paper, the view is that when "interesting" findings are published, they get more views, more media, more citations, and lower review standards get applied. Afterward there's very little social punishment for results that prove false and nonreplicable (or reward for those demonstrating the lack of reproducibility). Notably, that paper drew a counterpoint stating that in psychology, at least, failure to replicate eventually predicts citation decline [7] (cited by 10), while the original got its authors ~250 citations and a bunch of media mentions.

[4] https://en.wikipedia.org/wiki/Publish_or_perish

[5] https://en.wikipedia.org/wiki/Replication_crisis

[6] https://www.science.org/doi/10.1126/sciadv.abd1705

[7] https://www.pnas.org/doi/10.1073/pnas.2304862120