shawntan | 4 months ago
Just check out the original UT paper, or some of its follow-ups: Neural Data Router, https://arxiv.org/abs/2110.07732; Sparse Universal Transformers (SUT), https://arxiv.org/abs/2310.07096. There is even theoretical justification for why: https://arxiv.org/abs/2503.03961
The challenge is actually scaling them up to be useful as LLMs as well (I describe why it's a challenge in the SUT paper).
Given the way ARC-AGI is allowed to be evaluated, it's hard to say whether this is actually what's at play. My gut tells me, given the type of data that's been allowed into the training set, that some leakage of the evaluation set has happened in both HRM and TRM.
But because as a field we've given up on carefully ensuring that training and test sets don't contaminate each other, we just decide it's fine and the effect is minimal. With LLMs especially, a test-set example leaking into the training data is merely a drop in the bucket (I don't believe we should be dismissing it this way, but that's a whole 'nother conversation).
With these challenge-targeted models, it becomes a much larger proportion of what influences model behaviour, especially when the open evaluation sets are there for everyone to look at and simply generate more of. Now we don't know if we're generalising or memorising.
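To make the verbatim-leakage part of this concern concrete, here is a minimal sketch of the kind of exact-match check one could run between a training corpus and an ARC-style evaluation set. The function names and grid representation are my own illustration, not from any of the papers discussed here:

```python
import hashlib

def grid_fingerprint(grid):
    # Canonical fingerprint for an ARC-style grid (a list of rows of ints).
    flat = ";".join(",".join(str(cell) for cell in row) for row in grid)
    return hashlib.sha256(flat.encode()).hexdigest()

def leakage_fraction(train_grids, eval_grids):
    # Fraction of evaluation grids that appear verbatim in the training data.
    train_hashes = {grid_fingerprint(g) for g in train_grids}
    hits = sum(grid_fingerprint(g) in train_hashes for g in eval_grids)
    return hits / len(eval_grids)
```

Note that exact hashing only catches verbatim duplication; leakage via augmentation or programmatically generated look-alike tasks, which is the harder worry here, would need fuzzier near-duplicate detection.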
jononor | 4 months ago
Of course it is quite costly and also requires some "marketing" to actually get it established.
shawntan | 4 months ago
If it is to test generalisation capability, then what data the model being evaluated was trained on is crucial to drawing any conclusions.
Look at the construction of this synthetic dataset for example: https://arxiv.org/pdf/1711.00350
YeGoblynQueenne | 4 months ago
"Now" starts around 1980 I'd say. Everyone in the field tweaks their models until they perform well on the "held-out" test set, so any ability to estimate generalisation from test-set performance goes out the window. The standard 80/20 train/test split makes it even worse.
I personally find it kind of scandalous that nobody wants to admit this in the field, and yet many people are happy to make big claims about generalisation, such as the "mystery" of generalising overparameterised neural nets.
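The tweak-until-the-test-set-looks-good effect described above is easy to demonstrate with a toy simulation (my own illustration, not from the comment): evaluate many pure-noise "models" on a small test set, keep whichever scores best, and the reported number drifts well above chance even though nothing has been learned.

```python
import random

random.seed(0)
n_test = 200      # a small "held-out" test set
n_trials = 500    # number of tweak-and-peek attempts

# True labels of a binary task; every candidate model below is pure noise,
# so each one's honest expected accuracy is exactly 50%.
labels = [random.randint(0, 1) for _ in range(n_test)]

def accuracy(preds):
    return sum(p == y for p, y in zip(preds, labels)) / n_test

# Selection on the test set: keep the best-scoring random model.
best = max(
    accuracy([random.randint(0, 1) for _ in range(n_test)])
    for _ in range(n_trials)
)
print(f"best 'test' accuracy across {n_trials} noise models: {best:.1%}")
```

With these sizes the selected score typically lands around 60%, purely from selection pressure on a finite test set; a larger test set or a truly hidden set shrinks the inflation.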
shawntan | 4 months ago
You can still game those benchmarks (tune your hyperparameters after looking at test results), but that setting measures generalisation on the test set _given_ the specified training set. Using any additional data should be against the benchmark rules, and such results should not be compared on the same terms.
cubefox | 4 months ago
The Arc HRM blog post says:
> [...] we set out to verify HRM performance against the ARC-AGI-1 Semi-Private dataset - a hidden, hold-out set of ARC tasks used to verify that solutions are not overfit [...] 32% on ARC-AGI-1 is an impressive score with such a small model. A small drop from HRM's claimed Public Evaluation score (41%) to Semi-Private is expected. ARC-AGI-1's Public and Semi-Private sets have not been difficulty calibrated. The observed drop (-9pp) is on the high side of normal variation. If the model had been overfit to the Public set, Semi-Private performance could have collapsed (e.g., ~10% or less). This was not observed.
shawntan | 4 months ago
This would all go out the window if the model being evaluated can _see_ the type of distribution shift it would encounter during test time. And it's unclear whether the shift is the same in the hidden set.
There are questions about the evaluations that arise from the large models' performance against the smaller models, especially given the ablation studies. Are the large models trained on the same data as these tiny models? Should they be? If they shouldn't be, then why are we allowing these small models access to that data during training?