
Some critical issues with the SWE-bench dataset

350 points | joshwa | 1 year ago | arxiv.org

116 comments


comex|1 year ago

Some of the examples in the paper seem to be wrong.

For django-31056, they claim the AI-generated patch is "incomplete" because it's "missing critical parts of this logic, such as the try-except block and the check for a running event loop." But if you look at the diff, that's clearly wrong. The try-except block and the running-loop check were already there before the patch. The human patch just indented them, making them appear as both - and +, while the AI patch didn't. To me, the AI patch seems correct. It's slightly less efficient than the human patch when DJANGO_ALLOW_ASYNC_UNSAFE is set, but slightly more efficient when it isn't (which is the common case!). The human patch does feel more natural, but the AI patch is fine. I'd grade it a tie between human and AI.
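
For readers who haven't pulled up the diff, here is a rough paraphrase of the two patch shapes being compared, reconstructed from the description above rather than from the verbatim Django diffs (Django raises its SynchronousOnlyOperation exception; a plain RuntimeError stands in for it here):

```python
import asyncio
import os


# Human patch (paraphrased): check the env var first, indenting the
# pre-existing try/except block underneath it.
def human_patch_check(message):
    if not os.environ.get("DJANGO_ALLOW_ASYNC_UNSAFE"):
        try:
            event_loop = asyncio.get_event_loop()
        except RuntimeError:
            pass
        else:
            if event_loop.is_running():
                raise RuntimeError(message)  # SynchronousOnlyOperation in Django


# AI patch (paraphrased): leave the existing block as-is and only consult
# the env var once a running loop has actually been detected.
def ai_patch_check(message):
    try:
        event_loop = asyncio.get_event_loop()
    except RuntimeError:
        pass
    else:
        if event_loop.is_running():
            if not os.environ.get("DJANGO_ALLOW_ASYNC_UNSAFE"):
                raise RuntimeError(message)  # SynchronousOnlyOperation in Django
```

So the human shape pays the env-var lookup on every call, while the AI shape only pays it when a running loop is detected, which is where the marginal efficiency difference in both directions comes from.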

For django-32517, they claim that the human and AI patches "produce entirely different outputs", but actually they do exactly the same thing. The human version has `reversed(self.dict)`, while the AI version has `reversed(self.dict.keys())`. Iterating over a dictionary in Python just gives you the keys, and `reversed` on a dict (or on its keys view) walks those keys in reverse insertion order, so it doesn't matter whether you call `.keys()` first. The human patch is more idiomatic, but it's also more confusing, as shown by the fact that it confused the authors of this paper. I'd grade it another tie.
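
A quick check of that equivalence, using a plain dict standing in for the OrderedSet internals (my example, not code from either patch):

```python
# On Python 3.8+ both forms walk the keys in reverse insertion order.
d = {"a": 1, "b": 2, "c": 3}
assert list(reversed(d)) == list(reversed(d.keys())) == ["c", "b", "a"]
```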

Edit: I tried to sign up for OpenReview so I could leave a comment about this, but the system wouldn't let me register without completing a form that assumes you have an academic position. Perhaps I should email the authors.

fourpostmaun2|1 year ago

The entire premise of this paper is false. In Section 2.1.1 they claim that the "hints_text" is used as model input and leaks the answer; however, the authors of SWE-Bench themselves state that it is not used anywhere (Issue #133 on the official SWE-Bench GitHub).

According to the paper:

> 1. Solution leak: represents instances where the solution to the issue is clearly outlined in the issue description or comments on GitHub. Since both the issue descriptions and comments (referred to as hints_text in the SWE-Bench study) are provided as input to the models, these LLM models can extract the solutions directly from this information instead of generating it independently.

And yet, the SWE-Bench authors themselves explicitly state:

> In short, for participating on the SWE-bench leaderboard, using hints_text in any manner is not allowed. Although we don't explicitly say this in the original paper, we also do not make any mention of using the hints_text anywhere.

So, it's a made-up issue that would only occur if you deviated from the paper's implementation and explicitly added a field called "hints" that isn't used anywhere.
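
For concreteness, here is roughly what that distinction looks like when loading the benchmark, assuming the Hugging Face princeton-nlp/SWE-bench layout (a sketch; the "disallowed" part reflects the SWE-Bench authors' statement quoted above):

```python
from datasets import load_dataset

ds = load_dataset("princeton-nlp/SWE-bench", split="test")
example = ds[0]

model_input = example["problem_statement"]  # the issue text: what models are given
hints = example["hints_text"]               # present in the data, but disallowed as
                                            # input for leaderboard submissions
```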

throwaway0123_5|1 year ago

> For django-32517

Although I agree with your analysis and it doesn't look great for the authors, this issue (https://code.djangoproject.com/ticket/32517) arguably falls into their "Solution leak" category anyways, as the following text appears in the issue description (and so I think directly in `problem_statement` rather than `hints_text`):

> Currently, OrderedSet isn't reversible (i.e. allowed to be passed as an argument to Python's reversed()). This would be natural to support given that OrderedSet is ordered. This should be straightforward to add by adding a __reversed__() method to OrderedSet.

It isn't the exact code though, so I suppose it could be argued instead that the issue is just extremely easy.
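
For context, the fix both patches amount to is roughly a one-line `__reversed__` method; a sketch based on the descriptions above, not the verbatim Django code:

```python
class OrderedSet:
    def __init__(self, iterable=None):
        # Insertion order is kept by backing the set with a dict.
        self.dict = dict.fromkeys(iterable or ())

    def __iter__(self):
        return iter(self.dict)

    def __reversed__(self):
        # The human patch's form; reversed(self.dict.keys()) is equivalent.
        return reversed(self.dict)
```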

codelion|1 year ago

Interesting analysis! I hadn't dug into the specific patch details like that. It's a good reminder that "correctness" isn't always the only dimension to evaluate these AI-generated patches – readability and idiomatic style definitely matter too, even if the functional outcome is the same.

I've been playing around with some automated code review tools recently, and it's surprising how often they flag things that are technically correct but just... unusual. Style matters, especially for maintainability.

_cs2017_|1 year ago

I can only confirm two mistakes in the paper: 1) as you say, the reversed(self.dict) version is actually correct; 2) as another poster below said, hints are not part of the input. These two mistakes are so egregious given the objective of the paper that I'm convinced the authors are not qualified to write it.

IMHO, it is probably better to discard this paper, and wait for someone else to cover this important topic.

petters|1 year ago

I think you should. Looks like there is more work to do.

siva7|1 year ago

The paper should then be retracted.

modeless|1 year ago

> When we filtered out these problematic issues, the resolution rate of SWE-Agent+GPT-4 dropped from 12.47% to 3.97%.

This matches my intuition about the coding performance of these models a lot better. I don't think any current coding benchmark accurately measures coding performance.

OsrsNeedsf2P|1 year ago

Anecdotal, but I was always shocked to see Claude 3.5 perform so poorly in the benchmarks, when it generates 80% of my code in Cursor (and in the cases where it fails, no other model succeeds).

theturtletalks|1 year ago

I personally use Aider's Polyglot Benchmark [0] which is a bit low-key and not gamed just yet. It matches my experience too where Claude Sonnet 3.5 is the best and still beats the new reasoning models like o3-mini, DeepSeek, etc.

0. https://aider.chat/docs/leaderboards/

delusional|1 year ago

> where the resolution rates of the models drop significantly, which are 0.73%, 0.55%, and 3.83%, respectively.

Matches my experience pretty well too. It'll usually output something that a novice would assume is correct but an expert can clearly identify as "know-it-all teenager forum post" level stuff.

alfalfasprout|1 year ago

Yep, anecdotally that's basically spot on. It's also one of the reasons that I still find copilot vastly more useful than highly autonomous AI tooling (cursor, roocode, avante, etc.)

siva7|1 year ago

o3-mini and gpt-4o are so piss poor in agent coding compared to claude that you don't even need a benchmark

avs733|1 year ago

It is worth reflecting on this point, as much as HN seems to hate the social sciences. The difficulty of measuring intelligence is a challenge that several fields have struggled with for decades. It is inherently hard because defining intelligence and building intelligence are very closely coupled. This both makes it hard to build unbiased measures and makes building measures that don't themselves affect the phenomenon basically NP-hard (see the Flynn effect [0]).

It also goes to how a lot of people misunderstand the replication crisis. 'Hard science' really should replicate - we should be able to filter out sources of error and variance because the phenomena (generally) aren't affected by our attempts to measure them. Making social science replicate often requires so much control that it is deabstracted from reality, meaning the effort at replication reduces the value and usefulness of the knowledge. Generalizable claims are hard because the sources of variance are so much larger and more complex. Speaking as someone who went through a transition from engineering to social sciences, this is the concept that made it hard. I started my time in social sciences with a cool idea of a whole career based on just doing replication studies, because science. That was...useful and stupid at the same time.

[0] https://en.wikipedia.org/wiki/Flynn_effect

0x20cowboy|1 year ago

It matches my experience as well.

I find the models very useful to chat about library documentation or high level algorithm concepts, but I find the code it generates to be… I don’t know how else to say it… really bad and often out of context.

I know developers who blindly follow the hype and use them to generate production code. That scares the poop emoji out of me, and the code reads like an asset-flipped 3D game.

bearjaws|1 year ago

I would argue almost every popular benchmark quoted by the big LLM companies is tainted.

OAI, xAI, Anthropic, Google all score incredibly well, then you go to try and write code and it's just okay.

They claim it can do PhD-level reasoning, but here I am not trusting it on basic computational thinking.

vonneumannstan|1 year ago

>They claim it can do PhD-level reasoning, but here I am not trusting it on basic computational thinking.

Not sure that's really the claim. I think they claim that performance on benchmarks like GPQA indicates PhD-level knowledge of different fields.

jandrese|1 year ago

Yeah, that's true in many fields with these AI agents. They demo well, but when you put them to actual work they fall right on their face. Even worse, the harder the task you set for them the more they lie to you. It's like hiring a junior dev from one of those highly regimented societies where it's more important to save face than to get the job done.

washadjeffmad|1 year ago

To be totally fair, using "PhD level" as a barometer of anything without specifying in what is like claiming that LLMs have encyclopedic knowledge while meaning a children's encyclopedia.

hackernewds|1 year ago

The popular benchmarks are the ones that have already leaked. Think about it.

ukFxqnLa2sBSBf6|1 year ago

There are a few things I’m not understanding here.

1. Did the benchmark authors not review the issues and make sure the solution was not present in the issue?

2. Are the issues locked after they’re included in the dataset? You’d think they would be immutable for reproducibility.

3. For the agents writing patches, is test running part of their inner-loop validation? If they write a patch that makes the test pass, then the job's done. Or is that validation step kept secret from the agent? I don't see how, unless the tests aren't part of the repo.

sebzim4500|1 year ago

>1. Did the benchmark authors not review the issues and make sure the solution was not present in the issue?

I looked at a bunch of issues in the dataset when SWE-verified first came out and I was trying to make scaffolding to solve it, and I don't remember a single time where the solution existed verbatim in the issue. I'm not saying it never happens, but it would have to be rare.

> 2. Are the issues locked after they’re included in the dataset?

No one changes the issues in the dataset but of course the original issue on github will have been resolved long ago. The models don't have access to this in their context, but if they were trained on github there's a very real risk that they've seen the solution.

> 3. For the agents writing patches, is test running part of their inner-loop validation? If they write a patch that makes the test pass, then the job's done. Or is that validation step kept secret from the agent? I don't see how, unless the tests aren't part of the repo.

The tests aren't provided to the model, they are run after the model has proposed its final answer.
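
In other words, the flow looks roughly like the sketch below (hypothetical helper names, not the actual SWE-bench harness; the field names follow the published dataset):

```python
def evaluate_instance(instance, agent, checkout, apply_patch, run_tests):
    repo = checkout(instance["repo"], instance["base_commit"])

    # The agent sees the issue text and the repository, but never the tests.
    model_patch = agent(instance["problem_statement"], repo)
    apply_patch(repo, model_patch)

    # Only after the final answer is submitted are the hidden tests applied and run.
    apply_patch(repo, instance["test_patch"])
    results = run_tests(repo, instance["FAIL_TO_PASS"], instance["PASS_TO_PASS"])
    return all(results)
```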

jbellis|1 year ago

Especially with SWE-verified, I thought that was the whole point of that dataset.

dang|1 year ago

Submitted title was "SWE-Bench tainted by answer leakage; real pass rates significantly lower". Normally we'd replace that with the article title, in keeping with the site guideline ("Please use the original title, unless it is misleading or linkbait; don't editorialize."), but in this case the article title is so generic that this is arguably misleading as well, so I took a representative phrase from the abstract instead. That's preferable, because it's better to use the authors' own representation of their article.

If anyone can find a better title (i.e. more accurate and neutral, preferably using language from the article itself) we can change it again.

https://news.ycombinator.com/newsguidelines.html

semi-extrinsic|1 year ago

So what we need is something like a versioned crowdsourced coding LLM eval dataset.

Every quarter, you have a couple thousand volunteers each provide 2 GitHub issues from the past 3 months which are nontrivial to resolve and for which strong test cases exist. Each volunteer then cross-checks 2 issues from other volunteers. The volunteers get a 1-month free subscription to some AI service in return.

This dataset is then published as SWE-UberBench-2025-02 or something. People can then only evaluate their coding LLM on datasets published after their training period.

delusional|1 year ago

And why would these "couple of thousand volunteers" help with this?

nitwit005|1 year ago

If you know some way to get people to volunteer millions of dollars of free labor, there are better uses of their time than evaluating LLMs.

SR2Z|1 year ago

Right, so that AI companies can freely throw this significantly more valuable training data into a model and then turn around and advocate for clamping down on the freedom of models.

optimalsolver|1 year ago

You need benchmarks with the following three properties:

1) No known solutions, so there's no "ground truth" dataset to train on

2) Presumably hard to solve

3) But easy to verify a solution if one is provided.

This, of course, is easier done on the STEM side of things, but how do you automatically test creativity, or philosophical aptitude?
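
A toy illustration of properties 2 and 3 on the STEM side (my own example, nothing to do with SWE-bench): producing the factors of a large semiprime is hard, but verifying a claimed factorization is a single pass of multiplication.

```python
def verify_factorization(n: int, factors: list[int]) -> bool:
    """Cheap verification of an expensive-to-find answer."""
    product = 1
    for f in factors:
        if f < 2:
            return False
        product *= f
    return product == n


assert verify_factorization(15, [3, 5])        # easy to check
assert not verify_factorization(15, [2, 7])    # wrong answers are caught just as easily
```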

hsuduebc2|1 year ago

I guess it's purely subjective. Maybe some internal commission when it comes to the quality of creative work?

huac|1 year ago

> 32.67% of the successful patches involve cheating as the solutions were directly provided in the issue report or the comments.

Looking at the benchmark, https://www.swebench.com/, about half of scored submissions score under 1/3 correct? So they're either not cheating, or not cheating effectively?

sebzim4500|1 year ago

LLMs do not reliably reproduce their training data. This is quite easy to demonstrate: every LLM has been trained on all of Wikipedia (at minimum), and yet if you ask it about a niche fact mentioned once on Wikipedia it is highly likely to get it wrong.

nraynaud|1 year ago

yeah, in the abstract they dropped the score from 12% to 3%, so sadly retirement is not yet here :(

perrygeo|1 year ago

The solution moving forward has to be private benchmark suites. I could see teams investing in their own set of programming challenges and periodically re-evaluating them - similar to how we would construct sets of live interview questions for candidates and qualitatively assess their ability.

It's so vital that it's not leaked and that it's fit-for-purpose and manually assessed. These general purpose, public benchmarks based on questionable metrics are effectively worthless to assess real programming skill.

Case in point, as others have mentioned here, Claude scores modestly on these benchmarks but vastly better than the alternatives in practice. I don't trust Claude fully but far more than OpenAI models; it's not even close. The IRL performance advantage is not reflected in any of these benchmarks.

brap|1 year ago

My own impression with SoTA models is that they’re very useful for coding, yet they suck ass for solving unique problems (which is the case for every sufficiently large codebase).

MattDaEskimo|1 year ago

There's a serious issue with benchmarks.

Instead of resolving it, some leaders are further complicating their meaning.

Such as OpenAI grading their benchmarks based on "how much money they made" or "how easy a model was convinced to hand over fake money".

otterley|1 year ago

I am shocked—shocked—when a vendor cheats in order to increase their benchmark scores.

I always tell my customers to ignore benchmarks and compare outcomes with their own workloads. Benchmarks are almost completely useless in the real world.

Snuggly73|1 year ago

I only trust benchmarks that I’ve faked myself :)

commandlinefan|1 year ago

Although I believe there's a lot of this going on, in this case it just appears to be incompetence rather than malice.

adamc|1 year ago

I don't know why you are getting downvoted. That is sane advice.

1024core|1 year ago

To quote Goodhart's Law: When a measure becomes a target, it ceases to be a good measure.

Or, as in the case of LLMs and benchmarks: When a benchmark becomes a target, it ceases to be a good benchmark.

OldGreenYodaGPT|1 year ago

> solutions were directly provided in the issue report or the comments

This is fine; many of my real tickets already explain the solution. A good ticket often offers a solution or where to start looking.

softwaredoug|1 year ago

Yep that's fine for an issue, but a problem if you're trying to eval whether AIs can solve coding problems.

ionwake|1 year ago

I was wondering how long this would take to surface. You can tell a surprising amount just by carefully watching how the trainers answer interview questions, which is kinda meta really.

shayanh|1 year ago

I found that this paper was submitted to ICLR, but got rejected: https://openreview.net/forum?id=pwIGnH2LHJ

To me the analysis of SWE-Bench is a solid contribution and informative. My guess is that to meet the conference's submission bar they had to come up with their own benchmark (SWE-Bench+), which wasn't thorough enough, and the paper got rejected mainly because of that.

vonneumannstan|1 year ago

Acceptance or rejection at big ML conferences doesn't seem to carry much signal either way anymore. They're completely saturated by grift and poor quality, so each paper should be evaluated independently of its conference status imo.

acc_297|1 year ago

> 32.67% of the successful patches involve cheating as the solutions were directly provided in the issue report or the comments.

Is this what Hofstadter means by a strange-loop?

andrepd|1 year ago

Turns out "AI deep research reasoning agent" was just "we can print the training set"

alalv|1 year ago

Something weird (or at least uncommon) that caught my attention and that I haven't seen mentioned in the comments is that they cite the SWE-bench paper's author by first name in the abstract, Carlos et al., and then by last name (as is usually done) in the paper, Jimenez et al.

htrp|1 year ago

Paper from October 2024