top | item 42496903

obastani | 1 year ago

It's not about training directly on the test set, it's about people discussing questions in the test set online (e.g., in forums), and then this data is swept up into the training set. That's what makes test set contamination so difficult to avoid.
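A common heuristic for spotting this kind of contamination is checking for verbatim n-gram overlap between benchmark questions and training documents. A minimal sketch (the function names, the choice of word-level 8-grams, and the example strings are all illustrative, not any lab's actual pipeline):

```python
def ngrams(text, n=8):
    """Lowercased word n-grams of the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_question, training_doc, n=8):
    """Flag a test question if any word n-gram from it also appears
    verbatim in a training document. n=8 is an illustrative choice;
    real decontamination pipelines vary in n and in normalization."""
    return bool(ngrams(test_question, n) & ngrams(training_doc, n))

# A forum post quoting the benchmark question verbatim trips the check,
# even though nobody "trained on the test set" on purpose.
question = "If a train leaves Chicago at 3pm traveling 60 mph how far by 6pm"
forum_post = ("saw this on the benchmark: if a train leaves chicago at 3pm "
              "traveling 60 mph how far by 6pm - answer is 180 miles")
print(is_contaminated(question, forum_post))  # True
```

The catch the comment points at: this only works if you can scan the training corpus, and paraphrased discussion of a question slips past exact n-gram matching entirely.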


joe_the_user | 1 year ago

Yes.

That is the "reality": because companies can train their models on the whole Internet, companies will train their (base) models on the entire Internet.

And in this situation, "having heard the problem" actually becomes a barrier to understanding the harder problems, since any variation of a known problem will receive a standard half-assed guesstimate.

And these companies "can't not" use these base models, since they're resigned to the "bitter lesson" (better, the "bitter lesson viewpoint" imo) that they need large-scale heuristics at the start of their process, and only then can they begin symbolic/reasoning manipulations.

But hold up! Why couldn't an organization freeze its training set and its problems and release both to the public? That would give us an idea of where the research stands. Ah, the answer comes out: because they don't own the training set, and the thing they want to train is a commercial product that needs every drop of data to be the best. As Yann LeCun has said, this isn't research, this is product development.

phkahler | 1 year ago

>> It's not about training directly on the test set, it's about people discussing questions in the test set online

Don't kid yourself. There are tens of billions of dollars going into AI. Some of the humans involved would happily cheat on comparative tests to boost investment.

xmprt | 1 year ago

The incentives are definitely there, but even CEOs and VCs know that if they cheat the tests just to get more investment, they're only cheating themselves. No one is liquidating within the next five years, so either they get caught and lose everything, or they spend all that energy cheating while sitting on a subpar model, which means losing to competitors who actually invested in good technology.

Having a higher valuation could help attract better talent or more funding for GPUs and actual model improvements, but I don't think that outweighs the risks unless you're a tiny startup with nothing to show (but then you wouldn't have the money to bribe anyone).