JonathanRaines|7 months ago
This work does have some very interesting ideas, specifically avoiding the costs of backpropagation through time.
However, it does not appear to have been peer reviewed.
The results section is odd. It does not include details of how they performed the assessments, and the only numerical values are in the figure on the front page. The results for ARC2 are (contrary to that figure) not top of the leaderboard (currently 19% compared to HRM's 5% https://www.kaggle.com/competitions/arc-prize-2025/leaderboa...)
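The BPTT cost mentioned above can be made concrete with a back-of-the-envelope sketch (hypothetical sizes, not taken from the paper): full backpropagation through time stores every intermediate hidden state of the unroll, while a one-step gradient approximation only keeps the final state.

```python
# Back-of-the-envelope sketch (hypothetical sizes, not from the paper):
# BPTT must store every intermediate hidden state of a T-step unroll,
# so activation memory grows linearly in T; a one-step gradient
# approximation only needs the last state, which is constant in T.

def bptt_activation_floats(T, hidden_size):
    """Floats stored for a full unroll of T recurrent steps."""
    return T * hidden_size

def one_step_activation_floats(T, hidden_size):
    """Floats stored when gradients flow through only the last step."""
    return hidden_size  # independent of T

T, H = 1000, 512
ratio = bptt_activation_floats(T, H) // one_step_activation_floats(T, H)
print(ratio)  # memory ratio equals T
```

With these assumed sizes, the memory gap is a factor of T, which is the kind of cost the architecture is trying to avoid.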
cs702|7 months ago
In fields like AI/ML, I'll take a preprint with working code over peer-reviewed work without any code, always, even when the preprint isn't well edited.
Everyone everywhere can review a preprint and its published code, instead of a tiny number of hand-chosen reviewers who are often overworked, underpaid, and on tight schedules.
If the authors' claims hold up, the work will gain recognition. If the claims don't hold up, the work will eventually be ignored. Credentials are basically irrelevant.
Think of it as open-source, distributed, global review. It may be messy and ad-hoc, since no one is in charge, but it works much better than traditional peer review!
smokel|7 months ago
If a professional reviewer spots a serious problem, the paper will not make it to a conference or journal, saving us a lot of trouble.
hodgehog11|7 months ago
Real peer review is when other experts independently verify the claims in the arXiv submission through implementation and (hopefully) cite you in their follow-up work. This thread is real peer review.
naasking|7 months ago
Which is fine, because peer review is not a good proxy for quality or validity.
rapatel0|7 months ago
Having been both a publisher and a reviewer across multiple engineering, science, and biomedical disciplines, I can say this occurs across academia.
mitthrowaway2|7 months ago
A peer reviewer will typically comment that some figures are unclear, that a few relevant prior works have gone uncited, or point out a followup experiment that they should do.
That's about the extent of what peer reviewers do, and basically what you did yourself.
frozenseven|7 months ago
Enough already. Please. The paper + code is here for everybody to read and test. Either it works or it doesn't. Either people will build upon it or they won't. I don't need to wait 20 months for 3 anonymous dudes to figure it out.
riku_iki|7 months ago
My observation is that peer reviewers never try to reproduce results or do a basic code audit to check, for example, that there is no data leakage from the test set into the training data.
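A minimal sketch of the kind of audit meant here, assuming an exact-match notion of leakage (function names and sample contents are hypothetical, not from any paper's code):

```python
# Hypothetical sketch of a basic leakage audit: flag any evaluation
# sample that also appears verbatim in the training data.

def leaked_examples(train_samples, test_samples):
    """Return test samples that appear verbatim in the training data."""
    train_set = {repr(s) for s in train_samples}  # hashable fingerprints
    return [s for s in test_samples if repr(s) in train_set]

train = [("puzzle_1", "solution_1"), ("puzzle_2", "solution_2")]
test = [("puzzle_2", "solution_2"), ("puzzle_3", "solution_3")]

print(leaked_examples(train, test))  # the one verbatim overlap
```

Real audits also need to catch near-duplicates (augmented or reformatted copies), which exact matching misses, but even this trivial check is more than most reviews do.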
bubblyworld|7 months ago
Your criticism makes sense for the maze-solving and Sudoku sets, of course, but I think it kinda misses the point (there are traditional algos that solve those just fine - it's more about whether neural nets can figure them out during training, given known issues with existing recurrent architectures).
Assuming this isn't fake news lol.