
sixtyj | 6 days ago

Well… it is happening. You can’t put spilled milk back in the bottle. But you can add future requirements that try to stop this behaviour.

E.g. the submission form could include a mandatory field: “I hereby confirm that I wrote this paper personally.” The terms of submission would note that violating this rule can lead to a temporary or permanent ban. In a world where research success is measured by points in WOS, this could slow the rise of LLM-generated papers.


asdfman123 | 6 days ago

Maybe we need to find a new metric to judge academics by, beyond quantity of papers.

AuryGlenz | 5 days ago

Unironically, maybe they should be scored by LLMs? My first thought was that the reviewers could score the papers, but that would lead to even more groupthink.

Ideally whoever is paying the academics should just be paying attention to their work and its worth, but that would be crazy.

tossandthrow | 6 days ago

This approach dismisses the cases where AI submissions are generally better.

I don't think this is appreciated enough: a lot of AI adoption is not happening to cut costs at the expense of quality. Quite the opposite.

I am in the process of replacing my company's use of Retool with an AI-generated back office.

First and foremost for usability, velocity, and security.

Secondly, we also save a buck.

moregrist | 6 days ago

> This approach dismisses the cases where AI submissions are generally better.

You’re perhaps missing the not-so-subtle subtext of Peter Woit’s post, and his entire blog, which is:

While AI is getting better, it’s still not _good_ by the standards of most science. However, it’s as good as hep-th, where (according to Peter Woit) the bar is incredibly low. His thesis is part “the whole field is bad” and part “the arXiv for this subfield is already full of human slop.”

I don’t have the background to engage with whether Peter Woit’s argument has merit, but he’s been consistent about it for 25+ years.