
FrontierMath was funded by OpenAI

483 points| wujerry2000 | 1 year ago |lesswrong.com

199 comments


agnosticmantis|1 year ago

“… we have a verbal agreement that these materials will not be used in model training”

Ha ha ha. Even written agreements are routinely violated as long as the potential upside > downside, and all you have is a verbal agreement? And you didn't disclose this?

At the time o3 was released I wrote “this is so impressive that it brings out the pessimist in me”[0], thinking perhaps they were routing API calls to human workers.

Now we see in reality I should’ve been more cynical, as they had access to the benchmark data but verbally agreed (wink wink) not to train on it.

[0: https://news.ycombinator.com/threads?id=agnosticmantis#42476... ]

jerpint|1 year ago

You can still game a test set without training on it; that's why you usually have both a validation set and a test set, and ideally seldom touch the latter. Routinely running evaluations against the test set lets the humans in the loop overfit to it.
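
A toy simulation of that human-in-the-loop effect (all numbers hypothetical; this illustrates the mechanism, not anything specific to FrontierMath):

    import random

    random.seed(0)

    N_QUESTIONS = 300   # hypothetical benchmark size
    TRUE_SKILL = 0.20   # every config solves each problem with probability 0.20
    N_TWEAKS = 100      # how many tweaks the team tries against the test set

    def evaluate(skill, n):
        # Score one config on the same fixed test set: a noisy binomial draw.
        return sum(random.random() < skill for _ in range(n)) / n

    # Honest protocol: evaluate on the test set once.
    print("single eval:", evaluate(TRUE_SKILL, N_QUESTIONS))

    # Leaky protocol: try many configs, keep whichever looks best on the test set.
    best = max(evaluate(TRUE_SKILL, N_QUESTIONS) for _ in range(N_TWEAKS))
    print("best of", N_TWEAKS, "evals:", best)

The "best of 100" score lands several points above the true 20% even though no configuration ever saw an answer.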

asadotzler|1 year ago

OpenAI doesn't respect copyright, so why would they let a verbal agreement get in the way of billion$?

cma|1 year ago

OpenAI's benchmark results looking like Musk's Path of Exile character..

echelon|1 year ago

This has me curious about ARC-AGI.

Would it have been possible for OpenAI to have gamed ARC-AGI by seeing the first few examples and then quickly mechanical turking a training set, fine tuning their model, then proceeding with the rest of the evaluation?

Are there other tricks they could have pulled?

It feels like unless a model is being deployed to an impartial evaluator's completely air gapped machine, there's a ton of room for shenanigans, dishonesty, and outright cheating.

charlieyu1|1 year ago

Why would they use the materials in model training? It would defeat the purpose of having a benchmarking set

teleforce|1 year ago

>perhaps they were routing API calls to human workers

Honest question, did they?

2-3-7-43-1807|1 year ago

verbal agreement ... that's just saying that you're a little dumb, or you're playing dumb because you're in on it.

chvid|1 year ago

Not used in model training probably means it was used in model validation.

lolinder|1 year ago

A co-founder of Epoch left a note in the comments:

> We acknowledge that OpenAI does have access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities. However, we have a verbal agreement that these materials will not be used in model training.

Ouch. A verbal agreement. As the saying goes, those aren't worth the paper they're written on, and that's doubly true when you're dealing with someone with a reputation like Altman's.

And aside from the obvious flaw in it being a verbal agreement, there are many ways in which OpenAI could technically comply with this agreement while still gaining a massive unfair advantage on the benchmarks to the point of rendering them meaningless. For just one example, knowing the benchmark questions can help you select training data that is tailored to excelling at the benchmarks without technically including the actual question in the training data.
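
A sketch of what that selection loophole could look like in practice (the corpus and questions below are made up, and this illustrates the loophole rather than anything OpenAI is known to have done):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Candidate training documents and the (known) benchmark questions.
    corpus = [
        "survey of modular forms and elliptic curves",
        "recipe blog post about sourdough starters",
        "lecture notes on p-adic analysis",
        "sports recap of last night's game",
    ]
    benchmark_questions = ["determine the rank of an elliptic curve over Q"]

    vec = TfidfVectorizer().fit(corpus + benchmark_questions)
    sims = cosine_similarity(vec.transform(corpus), vec.transform(benchmark_questions))

    # Train only on the documents most similar to some benchmark question;
    # the questions themselves never enter the training set.
    ranked = sims.max(axis=1).argsort()[::-1]
    print([corpus[i] for i in ranked[:2]])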

aithrowawaycomm|1 year ago

What's even more suspicious is that these tweets from Elliot Glazer indicate that they are still "developing" the hold-out set, even though elsewhere Epoch AI strongly implied this already existed: https://xcancel.com/ElliotGlazer/status/1880809468616950187

It seems to me that o3's 25% benchmark score is 100% data contamination.

sillysaurusx|1 year ago

The questions are designed so that such training data is extremely limited. Tao said the relevant literature was sometimes around half a dozen papers at most. That's not really enough to overfit on without causing other problems.

jsheard|1 year ago

Why do people keep taking OpenAIs marketing spin at face value? This keeps happening, like when they neglected to mention that their most impressive Sora demo involved extensive manual editing/cleanup work because the studio couldn't get Sora to generate what they wanted.

https://news.ycombinator.com/item?id=40359425

th1243127|1 year ago

It might be because (very few!) mathematicians like Terence Tao make positive remarks. I think these mathematicians should be very careful to use reproducible and controlled setups that by their nature cannot take place on GPUs in the Azure cloud.

I have nothing against scientists promoting the Coq Proof Assistant. But that's open source, can be run at home and is fully reproducible.

rvz|1 year ago

Because they are completely gullible and believe almost everything that OpenAI does without questioning the results.

With each product they release, more of their top researchers leave.

Everyone now knows what happens when you go against or question OpenAI after working for them, which is why you don't see any criticism and more of a cult-like worship.

Once again, "AGI" is a complete scam.

refulgentis|1 year ago

Because the models have continually matched the quality they claim.

Ex. look how much work "very few" has to do in the sibling comment. It's like saying "very few physicists [Einstein/Feynman/Witten]"

It's conveniently impossible to falsify the implication that the inverse of "very few" say not-positive things, i.e. that the vast majority say negative things.

You have to go through an incredible level of mental gymnastics, involving many months of gated decisions where the route chosen involved "gee, I know this is susceptible to confirmation bias, but...", to end up wondering why people think the models are real if OpenAI has access to data that includes some set of questions.

diggan|1 year ago

> Tamay from Epoch AI here. We made a mistake in not being more transparent about OpenAI's involvement. We were restricted from disclosing the partnership until around the time o3 launched, and in hindsight we should have negotiated harder for the ability to be transparent to the benchmark contributors as soon as possible. Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset.

Not sure "integrity of the benchmarks" should even be something you negotiate over; what's the value of a benchmark if the results can't be trusted because of undisclosed relationships and data sharing? Why would they be restricted from disclosing things you'd normally disclose, and how does that not raise all sorts of warning flags when it's proposed?

aunty_helen|1 year ago

This feels like a done deal. This benchmark should be discarded.

bogtog|1 year ago

A lot of the comments allege some type of deliberate cheating on the benchmark. However, even without intentionally trying to game it, if anybody can repeatedly take the same test, then they'll be nudged toward overfitting/p-hacking.

For instance, suppose they conduct an experiment and find that changing some hyper-parameter yields a 2% boost. That could just be noise, it could be a genuine small improvement, or it may be a mix of a genuine boost along with some fortunate noise. An effect may be small enough that researchers would need to rely on their gut to interpret it. Researchers may jump on noise while believing they have discovered true optimizations. Enough of these types of nudges, and some serious benchmark gains can materialize.
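
For scale, a quick back-of-the-envelope on how big that noise is (the benchmark size here is an assumption, not Epoch's actual count):

    import math

    n = 300    # hypothetical number of benchmark questions
    p = 0.25   # observed accuracy

    se = math.sqrt(p * (1 - p) / n)   # binomial standard error of the score
    print(f"standard error: {100 * se:.1f} percentage points")   # ~2.5

So a 2% "boost" is within one standard error: exactly the kind of effect a researcher's gut can mistake for a genuine optimization.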

(Hopefully my comment isn't entirely misguided, I don't know how they actually do testing or how often they probe their test set)

madars|1 year ago

I cringe every time I see "my IQ increased by X points after doing Y" posts on Twitter - yes, you had a practice run on Raven's progressive matrices a month ago; that helped, these have a limited question bank, and the effect of Y is marginal. That said, test taking is obviously a skill (separate from background knowledge and general/domain-specific ability) and should be trained if you expect to have life-altering events based on tests (e.g., do an LSAT course if you want to go to law school). Conversely, it shouldn't be trained if you think it will limit you through superstition ("I had a score of X, thus I can only perform around the level of X+fudge factor"). For an LLM company, a good test score is a valuation-altering event!

zarzavat|1 year ago

OpenAI played themselves here. Now nobody is going to take any of their results on this benchmark seriously, ever again. That o3 result has just disappeared in a poof of smoke. If they had blinded themselves properly then that wouldn't be the case.

Whereas other AI companies now have the opportunity to be first to get a significant result on FrontierMath.

colonial|1 year ago

I'd be surprised if any of their in-house benchmark results are taken seriously after this. As an extremely rough estimate, FrontierMath cost five to six figures to assemble [1] - so from an outside view, they clearly have no qualms with turning cash into quasi-guaranteed benchmark results.

[1]: https://epoch.ai/math-problems/submit-problem - the benchmark is comprised of "hundreds" of questions, so at the absolute lowest it cost 300 * 200 = 60,000 dollars.

red75prime|1 year ago

Conversely, if they didn't cheat, and they funded the creation of the test suite to get "clean" problems (while hiding their participation to prevent getting problems somehow tailored to be hard for LLMs specifically), then they have no reason to fear that all this looks fishy: the test results will soon be vindicated when they give wider access to the model.

I refrain from forming a strong opinion in such situations. My intuition tells me that it's not cheating. But, well, it's intuition (probably based on my belief that the brain is nothing special physics-wise, that it doesn't manage to realize unknown quantum algorithms in its warm and messy environment, and that classical computers can therefore reproduce all of its feats given appropriate algorithms and enough computing power. And math reasoning is just another step on the ladder of capabilities, not something that requires a completely different approach). So, we'll see.

eksu|1 year ago

This risk could be mitigated by publishing the test.

ripped_britches|1 year ago

Do people actually think OpenAI is gaming benchmarks?

I know they have lost trust and credibility, especially on HN. But this is a company with a giant revenue opportunity to sell products that work.

What works for enterprise is very different from “does it beat this benchmark”.

No matter how nefarious you think sama is, everything points to “build intelligence as rapidly as possible” rather than “spin our wheels messing with benchmarks”.

In fact, even if they did fully lie and game the benchmark - do you even care? As an OpenAI customer, all I care about is that the product works.

I code with o1 for hours every day, so I am very excited for o3 to be released via API. And if they trained on private datasets, I honestly don’t care. I just want to get a better coding partner until I’m irrelevant.

Final thought - why are these contractors owed a right to know where the funding came from? I would definitely be proud to know I contributed to the advancement of the field of AI if I were included in this group.

mlsu|1 year ago

Gaming benchmarks has a lot of utility for OpenAI whether or not their product works.

Many people compare models based on benchmarks. So if OpenAI can appear better than Anthropic, Google, or Meta by gaming benchmarks, it's absolutely in their interest to do so, especially if their product is only slightly behind, because evaluating model quality is very, very tricky business these days.

In particular, if there is a new benchmark, it's doubly in their interest to game it, because they know that other providers will start using and optimizing performance towards that benchmark, in order to "beat" OpenAI and win market share.

On a personal level, their model is getting beaten handily by Claude Sonnet 3.5 right now. It doesn't seem to show in the benchmarks. I wonder why?

This is a company shedding its coat of ethics and scientific rigor so as to be as unencumbered as possible in its footrace to the dollar.

saithound|1 year ago

Yes, it looks all but certain that OpenAI gamed this particular benchmark.

Otherwise, they would not have had a contract that prohibited revealing that OpenAI was involved with the project until after the o3 announcements were made and the market had time to react. There is no reason to have such a specific agreement unless you plan to use the backdoor access to beat the benchmark: otherwise, OpenAI would not have known in advance that o3 would perform well! In fact, if there was proper blinding in place (which Epoch heads confirmed was not the case), there would have been no reason for secrecy at all.

Google, xAI and Anthropic's test-time compute experiments were really underwhelming: if OpenAI has secret access to benchmarks, that explains why their performance is so different.

jatins|1 year ago

> Do people actually think OpenAI is gaming benchmarks?

I was blown away by the ChatGPT release and have generally admired OpenAI; however, I wouldn't put it past them.

At this point their entire marketing strategy seems to be to do vague posting on X/Twitter and keep hyping the models so that investors always feel there is something around the corner

And I don't think they need to do that. Most investors will be throwing money at them either way but maybe when you are looking to raise _billions_ that's not enough

maeil|1 year ago

> Do people actually think OpenAI is gaming benchmarks?

Yes, they 100% do. So do their main competitors. All of them do.

cbg0|1 year ago

> Do people actually think OpenAI is gaming benchmarks?

Yes, there's no reason not to do it, only upsides when you try to sell it to enterprises and governments.

331c8c71|1 year ago

Well, I certainly won't object if OAI marketing were based on testimonials from their fanboy customers instead of rigged benchmark scores %)

Your flagrant disregard for ethics and focus on utilitarian aspects is extreme enough that, in my view, only a few people would agree with you.

lionkor|1 year ago

People on here were mocking me openly when I pointed out that you can't be sure LLMs (or any AIs) are actually smart unless you CAN PROVE that the question you're asking isn't in the training set (or adjacent like in this case).

So with this in mind now, let me repeat: Unless you know that the question AND/OR answer are not in the training set or adjacent, do not claim that the AI or similar black box is smart.

pcmoore|1 year ago

I ran a test yesterday on ChatGPT and Copilot: I first asked whether it knew of a specific paper, which it confirmed, and then asked it to derive simple results from that paper, which it was completely incapable of doing. I know this paper is not widely referenced (i.e., few known results in the public domain), but it has been available for over 15 years, with publicly accessible code written by humans. The training set was so sparse that it had no ability to "understand" or even regurgitate past the summary text, which it listed almost verbatim.

sitkack|1 year ago

This all smells like the OpenAI CEO's MO. Stupid drama for stupid reasons.

KeplerBoy|1 year ago

It doesn't need to be smart to be useful. A lot of the kind of work I do seems to be in the training set.

MattDaEskimo|1 year ago

There's something gross about OpenAI constantly misleading the public.

This maneuver by their CEO will destroy FrontierMath's and Epoch AI's reputations.

cbracketdash|1 year ago

Reminds me of the following proverb:

"The integrity of the upright guides them, but the unfaithful are destroyed by their duplicity."

(Proverbs 11:3)

benterix|1 year ago

> Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset.

Man, this is huge.

wujerry2000|1 year ago

My takeaways:

(1) Companies will probably increasingly invest in building their own evals for their own use cases, because it's becoming clear that public and allegedly-private benchmarks have incentives misaligned with the labs sponsoring/cheating on them.

(2) Those evals will probably be proprietary "IP", guarded as closely as the code or research itself.

(3) Conversely, public benchmarks are exhausted, and SOMEONE has to invest in funding more frontier benchmarks. So this is probably going to continue.

gunalx|1 year ago

So, in conclusion, any evaluation of OpenAI models on FrontierMath is thoroughly invalidated.

I would even go so far as to say this invalidates not only FrontierMath but anything Epoch AI has touched and will touch.

Any academic misjudgement like this, a massive conflict of interest plus cheating, makes you untrustworthy in an academic context.

BrenBarn|1 year ago

This kind of thing is so avoidable by anyone who has not sold their soul. The answer is: if a company wants you to do a deal but requires as a condition that you not reveal to anyone that you are doing a deal with that company, you just say no. It's that simple.

Imnimo|1 year ago

My guess is that OpenAI didn't cheat as blatantly as just training on the test set. If they had, surely they could have gotten themselves an even higher mark than 25%. But I do buy the comment that they soft-cheated by using elements of the dataset for validation (which is absolutely still a form of data leakage). Even so, I suspect their reported number is roughly legit, because they report numbers on many benchmarks, and they have a good track record of those numbers holding up to private test sets.

What's much more concerning to me than the integrity of the benchmark number is the general pattern of behavior here from OpenAI and Epoch. We shouldn't accept secret funding of a benchmark's creation (secret even from the people doing the creating!). I also don't see how we can trust the integrity of Epoch AI going forward. This is basically their only meaningful output, and this is how they handled it?

riku_iki|1 year ago

> If they had, surely they could have gotten themselves an even higher mark than 25%.

there are potentially limits on LLMs' ability to memorize such complex proofs

j_timberlake|1 year ago

Elon definitely still has a grudge against Altman and OpenAI, so when Elon uses his new political power to bludgeon OpenAI to bankruptcy with new regulations and lawsuits, it won't be for the right reasons, but I'll still think Altman and the remaining employees deserve it.

padolsey|1 year ago

Many of these evals are quite easy to game. Often the actual evaluation part of benchmarking is left up to a good-faith actor, which was usually reasonable in academic settings less polluted by capital. AI labs, however, have incentives not to do a thorough or impartial job, so IMO we should never take their word for it. To verify, we need to be able to run these evals ourselves; this is only sometimes possible, since even when the datasets are public, the exact mechanisms of evaluation are not. In the long run, to be completely resilient to gaming via training, we probably need to follow the lead of other fields and have third-party, non-profit, accredited (!!) evaluators whose entire premise is to evaluate, red-team, and generally keep AI safe and competent.

matt_daemon|1 year ago

At this point, eval results presented by AI companies are a joke and should not be trusted.

WasimBhai|1 year ago

I have been taking a course in AI policy, and o1 and the FrontierMath dataset have been important markers for me to emphasize the world we are moving toward. It is incredibly sad to learn about the conflict of interest here. For those more knowledgeable: can you explain in plain words whether this revelation compromises OAI's claims regarding o3's performance on FrontierMath problems?

energy123|1 year ago

It's worse than just an undeclared conflict of interest. They gave OpenAI all questions and solutions behind the scenes. It's hard to chalk this up to only naivete. This is a "sorry you caught me" moment.

lolinder|1 year ago

They have an oral agreement that OpenAI won't use the benchmark in training. Which means first and foremost you have to consider the possibility that they broke that oral agreement and actually included the problems in the training set. Even if they didn't, the fact that they had the problems means they could have selectively chosen the training set data to specialize in solving that class of problem, while still technically keeping the verbal agreement.

So, yeah, the benchmark needs to be treated as essentially worthless at this point.

refulgentis|1 year ago

It's increasingly odd to see HN activity that assumes the premise: if the latest benchmark results involved a benchmark that can be shown to have any data that OpenAI could have accessed, then the benchmark results were intentionally faked.

Last time, this confused a bunch of people who didn't understand what test vs. train data meant, and it resulted in a particular luminary complaining on Twitter, to much guffawing, about how troubling the situation was.

Literally every comment currently, modulo [1], assumes this and then goes several steps further, and a majority are wildly misusing terms that have precise meanings, which explains at least part of the confusion.

[1] modulo the one saying this is irrelevant because we'll know if it's bad when it comes out; which, to be fair, evaluated rationally, doesn't help us with the narrow suspicion that the FrontierMath results are invalid because the model trained on (most of) the solutions

EvgeniyZh|1 year ago

Why wouldn't OpenAI cheat? It's an open secret in industry that benchmarks are trained on. Everybody does it, so you need to do that or else your similarly performing model will look worse on paper.

And even if they respect the agreement, merely using the test set as a validation set can be a huge advantage. That's why "validation set" and "test set" are two different terms with precise meanings.

As for "knowing it's bad", most people won't be able to tell apart a model scoring 25% and one scoring 10%. People who use these models to solve math problems are a tiny share of users and an even tinier share of revenue. What OpenAI needs is to convince investors that progress in capabilities is still going at a high pace, and gaming the benchmarks makes perfect sense in this context. 25% was surprising and appeared to surpass expectations, which is exactly what OpenAI needs.

mrg3_2013|1 year ago

OpenAI continues to muddy the benchmarks, while Claude continues to improve its intelligence. Claude will win long term. It'd be wise to not rely on OpenAI at all. They are first movers who will just burn cash and crash out, I suspect.

atleastoptimal|1 year ago

The problem is, any benchmark on a closed model couldn’t be private even in theory, as the model has to be called to run the benchmark, exposing the contents to whoever owns the model thereafter.

HN loves to speculate that OpenAI is some big scam whose seeming ascendance is based on deceptive marketing hype, but o1, to anyone who has tried it seriously, is undoubtedly very much within the ballpark of what OpenAI claims it is able to do. If everything they are doing really is just overfitting and gaming the tests, that discrepancy will eventually catch up to them, and people will stop using the APIs and ChatGPT.

karmasimida|1 year ago

They should at least clarify it. The reason they don't, I feel, is simply hype and mystique.

There are ways you could game the benchmark without adding it to the training set. Evaluate on the dataset repeatedly and it regresses into a validation set rather than a test set, even in a black-box setting: you can simply evaluate 100 checkpoints, pick the one that performs best, and rinse and repeat.
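
Back-of-envelope for how much that buys you: the best of k equally good checkpoints sits roughly sqrt(2 ln k) standard errors above the true level (the numbers below are assumptions):

    import math

    n, p, k = 300, 0.25, 100   # hypothetical: questions, true accuracy, checkpoints tried
    se = math.sqrt(p * (1 - p) / n)
    lift = math.sqrt(2 * math.log(k)) * se   # expected max of k roughly-normal draws
    print(f"expected lift from picking the best checkpoint: ~{100 * lift:.1f} points")  # ~7.6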

I still believe o3 is the real deal, BUT this gimmick kind of sours my appetite a bit toward those who run the company.

nottorp|1 year ago

So basically when you need to look good in benchmarks you fund an organization that does benchmarks in which you look good.

Just like toothpaste manufacturers fund dentists' associations, etc.

ForHackernews|1 year ago

Unrelated to anything but what software is this blog running on? I love the sidenote feature.

Why does it have a customer service popover chat assistant?

Vecr|1 year ago

The Lightcone Infrastructure forum stack. I don't know why it has an assistant.

zrc108071849|1 year ago

Even if OpenAI does not use these materials to directly train its models, it can collect or construct more data based on the topics and techniques these questions test, gaining an unfair competitive advantage. It's like a teacher reading some of the Gaokao questions before the exam and then marking the tested topics in your book for you. This is cheating.

suchintan|1 year ago

I wonder if more companies should open source their eval model outputs alongside the eval results

We tried doing that here at Skyvern (eval.skyvern.com)

maeil|1 year ago

This isn't news; the other popular benchmarks are just as gamed and worthless, and it would be shocking if this one weren't. The other frontier model providers game them just as hard; it's not an OpenAI thing. Any benchmark that a provider themselves mentions is not worth the pixels it's written on.

floppiplopp|1 year ago

Unless you have been up to the shoulders in the hype-hole of Scam Altman's backside this should not come as the slightest surprise.

moi2388|1 year ago

“… we have a verbal agreement that these materials will not be used in model training”

What about model testing before releasing it?

treksis|1 year ago

so it was overfit

numba888|1 year ago

If they had used it in training, it should be a 100% hit. Most likely they used it to verify and tune parameters.

rrr_oh_man|1 year ago

> if they used it in training it should be 100% hit.

Not necessarily, no.

A statistical model will attempt to minimise overall loss, generally speaking.

If it gets 100% accuracy on the training data, it's usually overfit (hugging the data points too tightly, and thereby failing to predict real-life cases).
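
The classic toy demonstration of this, assuming nothing beyond numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)  # noisy signal
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (3, 9):  # degree 9 can interpolate all 10 training points exactly
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

The degree-9 fit nails its training points and does worse on the held-out curve, which is the overfit being described.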

g-b-r|1 year ago

Had they let it hit 100% it would have been obvious they had the data.

They've sure been careful to avoid that, by only using a portion of it or some other technique

m3kw9|1 year ago

This doesn't really matter much, because if the model sucks when it comes out, the evals will mean nothing next time.

katamari-damacy|1 year ago

“we now know how to build AGI” --Sam Altman.

which should really be "we now know how to improve associative reasoning, but we still need to cheat when it comes to math, because the bottom line is that the models can only capture logic associatively, not synthesize deductively, which is what's needed for math beyond recipe-based reasoning"