item 39203135


maranas | 2 years ago

This has been an issue with university rankings for a while. A lot of top US universities engage in this practice too - anecdotal, but I've heard of a lot of professors forcing students to add or remove citations, or even adding their names to the author list of papers they didn't work on, to help the numbers for their university.

It would be good to see what the criteria are for deciding whether a journal is "to be taken seriously". I imagine, for example, that Chinese or Arabic language journals would be published and cited in journals of those languages. That doesn't necessarily mean they aren't to be taken seriously in the field, just that they aren't Western publications.



chriskanan | 2 years ago

Regarding inflating the number of authors, it is especially bad in medicine, where I've observed a lot of names being added to papers for "political" reasons, despite the "author" playing no role in the paper.

Some journals now require an "Author Contributions" section to at least partially address this issue.

kjkjadksj | 2 years ago

The worst part of this in the biomedical field is the conferences. That's because sometimes you get toddlers with an advanced degree and a chair position picking the conference presenters, who will unilaterally reject or accept people on the grounds of whether they like them or see them as a competitor, even within the same department, with no regard to what the poster or talk might be. At least with journals you have an editor who can sometimes mediate a hotheaded reviewer dispute in a level-headed manner.

xqcgrek2 | 2 years ago

I've seen it go in the other direction too: groups deliberately not citing competing groups because it might help them. It's as if the other groups don't exist.

chriskanan | 2 years ago

I've observed this in multiple AI niches. In some cases I've emailed people to point out that they ignored very similar work and failed to cite it, and at least some were apologetic and said they would update the arXiv version - though only about half actually do. It kind of tells you that even the reviewers at top AI conferences aren't that familiar with the breadth of the literature.