SeanLuke|1 month ago
Recently I was made aware by colleagues of a publication by authors of a new agent-based modeling toolkit in a different, hipper programming language. They compared their system to others, including mine, and made kind of a big checklist of who's better in what, and no surprise, theirs came out on top. But digging deeper, it quickly became clear that they didn't understand how to run my software correctly; and in many other places they bent over backwards to cherry-pick, and made a lot of bold and completely wrong claims. Correcting the record would place their software far below mine.
Mind you, I'm VERY happy to see newer toolkits which are better than mine -- I wrote this thing over 20 years ago after all, and have since moved on. I wasn't inclined to pursue a correction myself, but several colleagues demanded I do so. After a lot of back-and-forth, however, it became clear that the journal's editor was too embarrassed to require a retraction or revision. And the authors kept coming up with excuses for their errors. So the journal quietly dropped the complaint.
I'm afraid that this is very common.
mnw21cam|1 month ago
I recommended that the journal not publish the paper, and gave them a good list of improvements to give to the authors that should be made before re-submitting. The journal agreed with me, and rejected the paper.
A couple of months later, I saw it had been published unchanged in a different journal. It wasn't even a lower-quality journal; if I recall correctly, the impact factor was actually higher than the original one's.
I despair of the scientific process.
timr|1 month ago
This is one of the reasons you should never accept a single publication at face value. But this isn’t a bug — it’s part of the algorithm. It’s just that most muggles don’t know how science actually works. Once you read enough papers in an area, you have a good sense of what’s in the norm of the distribution of knowledge, and if some flashy new result comes over the transom, you might be curious, but you’re not going to accept it without a lot more evidence.
This situation is different, because it’s a case where an extremely popular bit of accepted wisdom is both wrong, and the system itself appears to be unwilling to acknowledge the error.
BLKNSLVR|1 month ago
Schools should be using these kinds of examples in order to teach critical thinking. Unfortunately the other side of the lesson is how easy it is to push an agenda when you've got a little bit of private backing.
bargle0|1 month ago
I was an undergraduate at the University of Maryland when you were a graduate student there in the mid nineties. A lot of what you had to say shaped the way I think about computer science. Thank you.
neilv|1 month ago
Universities care about money and reputation. Individuals at universities care about their careers.
With exceptions of some saintly individual faculty members, a university is like a big for-profit corporation, only with less accountability.
Faculty bring in money, are strongly linked to reputation (scandal news articles may even say the university name in headlines rather than the person's name), and faculty are hard to get rid of.
Students are completely disposable, there will always be undamaged replacements standing by, and turnover means that soon hardly anyone at the university will even have heard of the student or internal scandal.
Unless you're really lucky, the university's position will be to suppress the messenger.
But if you go in with a lawyer, the lawyer may help your whistleblowing to be taken more seriously, and may also help you negotiate a deal to save your career. (As an example of such help: you need the university's/department's cooperation in switching advisors gracefully, with funding, even as the uni/dept is trying to minimize the number of people who know about the scandal.)
consp|1 month ago
Our conclusion was to never trust psychology majors with computer code. And, as with any other field of expertise, they should have shown their idea and/or code to some CS majors at the very least before publishing.
trogdor|1 month ago
How sad. Admitting and correcting a mistake may feel difficult, but it makes you credible.
As a reader, I would have much greater trust in a journal that solicited criticism and readily published corrections and retractions when warranted.
steveklabnik|1 month ago
Personally, I would agree with you. That's how these things are supposed to work. In practice, people are still people.
pseudohadamard|1 month ago
Now I'm not saying that everything in M-S is junk, but the small subset I was exposed to was.
orochimaaru|1 month ago
From the perspective of the academic community, there will be less incentive to publish incorrect results if data and code are shared.
ecshafer|1 month ago
They make a lot of claims about how much faster they are than MASON, NetLogo, and Mesa, but in practice I am not finding that to be the case. Also, they aren't counting the Julia compilation step, which takes an absurdly long time; by the time it finishes, similar simulations are already done, and only then do they start the clock on their own benchmark.
Agents.jl and Mesa do have the selling point of better languages/libraries for numerical computation. But that's really a subset of most ABM work, I think.
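The benchmarking complaint above boils down to cold-start vs. warm timing. A minimal sketch of the distinction in Python, where the one-time "compile" cost is simulated with a sleep (the function, costs, and timings here are illustrative assumptions, not measurements of any of these toolkits):

```python
import time

def run_model(steps, _cache={}):
    """Toy stand-in for a simulation whose first call pays a one-time
    setup cost, analogous to JIT compilation in Julia."""
    if "compiled" not in _cache:
        time.sleep(0.5)          # pretend this is the compilation step
        _cache["compiled"] = True
    total = 0
    for i in range(steps):
        total += i * i           # pretend this is the simulation loop
    return total

t0 = time.perf_counter()
run_model(100_000)               # cold run: includes the one-time cost
cold = time.perf_counter() - t0

t0 = time.perf_counter()
run_model(100_000)               # warm run: setup cost already paid
warm = time.perf_counter() - t0

print(f"cold: {cold:.3f}s  warm: {warm:.3f}s")
```

Reporting only the warm number makes the system look faster than what a user experiences on a fresh session; a fair comparison reports both, or states explicitly which one is being measured.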
contrarian1234|1 month ago
They're usually published with a response by the authors.