We hid backdoors in ~40MB binaries and asked AI + Ghidra to find them

244 points | jakozaur | 7 days ago | quesma.com

98 comments

7777332215|7 days ago

I know they said they didn't obfuscate anything, but if you hide imports/symbols and obfuscate strings, which is the bare minimum for any competent attacker, the success rate will immediately drop to zero.

This is just pattern-matching on anomalies in language associated with malicious activity, which is not impressive for an LLM.

stared|7 days ago

One of the authors here.

The tasks here are entry-level, so we are impressed that some AI models are able to detect some patterns while looking just at binary code. We didn't take it for granted.

For example, only a few models understand Ghidra and Radare2 tooling (Opus 4.5 and 4.6, Gemini 3 Pro, GLM 5) https://quesma.com/benchmarks/binaryaudit/#models-tooling

We consider it a starting point for AI agents being able to work with binaries. Other people discovered the same - vide https://x.com/ccccjjjjeeee/status/2021160492039811300 and https://news.ycombinator.com/item?id=46846101.

There is a long way ahead from "OMG, AI can do that!" to an end-to-end solution.

akiselev|7 days ago

When I was developing my ghidra-cli tool for LLMs to use, I used crackmes as tests, and the model had no problem getting through obfuscation as long as it was prompted about it. In practice, when reverse engineering real software, it can sometimes spin in circles for a while until it finally notices that it's dealing with obfuscated code, but as long as you update your CLAUDE.md/whatever with its findings, it generally moves smoothly from then on.

hereme888|7 days ago

I've used Opus 4.5 and 4.6 to RE obfuscated malicious code with my own Ghidra plugin for Claude Code and it fully reverse engineered it. Granted, I'm talking about software cracks, not state-level backdoors.

halflife|7 days ago

Aren't LLMs supposed to be better at analyzing obfuscated code than heuristics are? Because of their ability to pattern-match, can't they deduce what obfuscated code does?

Avamander|7 days ago

I have seen LLMs be surprisingly effective at figuring out such oddities. After all, they have ingested knowledge of a myriad of data formats, encryption schemes and obfuscation methods.

If anything, complex logic is what'll defeat an LLM. But a good model will also highlight when such logic is intractable.

Retr0id|7 days ago

Stripping symbols is fairly normal, but hiding imports ought to be suspicious in its own right.

akiselev|7 days ago

Shameless plug: https://github.com/akiselev/ghidra-cli

I’ve been using Ghidra to reverse engineer Altium’s file format (at least the Delphi parts) and it’s insane how effective it is. Models are not quite good enough to write an entire parser from scratch but before LLMs I would have never even attempted the reverse engineering.

I definitely would not depend on it for security audits but the latest models are more than good enough to reverse engineer file formats.

bitexploder|7 days ago

I can tell you how I am seeing agents be used with reasonable results. I will keep this high level. I don't rely on the agents solely. You build agents that augment your capabilities.

They can make diagrams for you, give you an attack surface mapping, and dig for you while you do more manual work. As you work on an audit you will often find things of interest in a binary or code base that you want to investigate further. LLMs can often blast through a code base or binary finding similar things.

I like to think of it like a swiss army knife of agentic tools to deploy as you work through a problem. They won't balk at some insanely boring task, and that can give you a real speed-up. The trick is that if you fall into the trap of trying to get too much out of an LLM, you end up pouring time into your LLM setup without getting good results; I think that is the LLM productivity trap. But if you have a reasonable subset of "skills" / "agents" you can deploy for various auditing tasks, it can absolutely speed you up some.

Also, when you have scale problems, just throw an LLM at it. Even low quality results are a good sniff test. Some of the time I just throw an LLM at a code review thing for a codebase I came across and let it work. I also love asking it to make me architecture diagrams.

jakozaur|7 days ago

Oh, nice find... We ended up using PyGhidra, but the models waste some cycles because of bad ergonomics. Perhaps your CLI would be easier.

Still, Ghidra's most painful limitation was its extremely slow analysis of Go binaries. We had to exclude that example from the benchmark.

Aeolun|6 days ago

> Models are not quite good enough to write an entire parser from scratch

In my experience models are really good at this? Not one shot, but writing decoders/encoders is entirely possible.

selridge|7 days ago

This is really cool! Thanks for sharing. It's a lot more sophisticated than what I did w/ Ghidra + LLMs.

lima|7 days ago

How does this approach compare to the various Ghidra MCP servers?

stared|7 days ago

Thanks for sharing! It seems to be an active space - vide a recent MCP server (https://news.ycombinator.com/item?id=46882389). If you haven't already, I highly recommend posting it as a Show HN.

I tried a few approaches: https://github.com/jtang613/GhidrAssistMCP (the hardest to set up), Ghidra analyzeHeadless (GPT-5.2-Codex worked well with it!), and PyGhidra (my go-to). Did you compare which works best?

I mean, very likely (especially with an explicit README for AI, https://github.com/akiselev/ghidra-cli/blob/master/.claude/s...) your approach is more convenient to use with AI agents.
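
For anyone curious what the PyGhidra route looks like in practice, here is a minimal sketch (the binary path is hypothetical; assumes a local Ghidra install with GHIDRA_INSTALL_DIR set and the pyghidra package installed):

  # Open a binary headlessly and enumerate its functions - roughly the
  # first step before deciding which functions to decompile and inspect.
  import pyghidra

  pyghidra.start()  # boots the JVM and Ghidra once per process

  with pyghidra.open_program("/tmp/suspect_binary") as flat_api:
      program = flat_api.getCurrentProgram()
      for func in program.getFunctionManager().getFunctions(True):
          print(func.getEntryPoint(), func.getName())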

mbh159|7 days ago

The methodology debate in this thread is the most important part.

The commenter who says "add obfuscation and success drops to zero" is right but that's also the wrong approach imo. The experiment isn't claiming AI can defeat a competent attacker. It's asking whether AI agents can replicate what a skilled (RE) specialist does on an unobfuscated binary. That's a legitimate, deployable use case (internal audit, code review, legacy binary analysis) even if it doesn't cover adversarial-grade malware.

The more useful framing: what's the right threat model? If you're defending against script kiddies and automated tooling, AI-assisted RE might already be good enough. If you're defending against targeted attacks by people who know you're using AI detection, the bar is much higher and this test doesn't speak to it.

What would actually settle the "ready for production" question: run the same test with the weakest obfuscation that matters in real deployments (import hiding, string encoding), not adversarial-grade obfuscation. That's the boundary condition.

celeryd|7 days ago

Why does that matter? Being oblivious to obfuscated binaries is like failing the captcha test.

Let's say instead of reversing, the job was to pick apples. Let's say an AI can pick all the apples in an orchard in normal weather conditions, but add overcast skies and success drops to zero. Is this, in your opinion, still a skilled apple picking specialist?

magicmicah85|7 days ago

GPT is impressive with a consistent 0% false positive rate across its models, yet its detection rate only goes as high as 18%. Meanwhile, Claude Opus 4.6 detects up to 46% of backdoors, but has a 22% false positive rate.

It would be interesting to have an experiment where these models actually attempt to exploit what they find, though their alignment may not allow that. Perhaps combining models could enable that kind of testing: the better models identify the issue and write up "how to verify" tests, and the "misaligned" models actually carry out the testing and report back to the better models.

sdenton4|7 days ago

It would be really cool if someone developed some standard language and methodology for measuring the success of binary classification tasks...

Oh, wait, we have had that for a hundred years - somehow it's just entirely forgotten when generative models are involved.
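
For the record, the forgotten vocabulary is just the confusion matrix and its derived rates; a minimal Python sketch (counts below are invented purely for illustration, not taken from the benchmark):

  # Standard binary-classification bookkeeping: true/false positives and
  # negatives, plus the derived rates a detection benchmark should report.
  tp, fn = 9, 11   # backdoored binaries flagged / missed
  fp, tn = 4, 16   # clean binaries flagged / correctly passed

  tpr = tp / (tp + fn)         # detection rate (recall / sensitivity)
  fpr = fp / (fp + tn)         # false positive rate
  precision = tp / (tp + fp)   # fraction of flags that are real backdoors
  print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  precision={precision:.2f}")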

selridge|7 days ago

>While end-to-end malware detection is not reliable yet, AI can make it easier for developers to perform initial security audits. A developer without reverse engineering experience can now get a first-pass analysis of a suspicious binary. [...] The whole field of working with binaries becomes accessible to a much wider range of software engineers. It opens opportunities not only in security, but also in performing low-level optimization, debugging and reverse engineering hardware, and porting code between architectures.

THIS is the takeaway. These tools are allowing *adjacency* to become a powerful guiding indicator. You don't need to be a reverser; you can just understand how your software works and drive the robot as a fallible hypothesis generator in regions where you can validate only some of the findings.

folex|7 days ago

> The executables in our benchmark often have hundreds or thousands of functions — while the backdoors are tiny, often just a dozen lines buried deep within. Finding them requires strategic thinking: identifying critical paths like network parsers or user input handlers and ignoring the noise.

Perhaps it would make sense to provide LLMs with some strategy guides written in .md files.

godelski|7 days ago

Depends what your research question is, but it's very easy to spoil your experiment.

Let's say you tell it that there might be small backdoors. You've now primed the LLM to search that way (even using "may"). You passed information about the test to the test taker!

So we have a new variable! Is the success only due to the hint? How robust is that prompt? Does subtle wording dramatically change output? Does "may", "does", "can", "might" work but "May", "cann", or anything else fail? Have you, the prompter, unintentionally conveyed something important about the test?

I'm sure you can prompt engineer your way to greater success, but by doing so you also greatly expand the complexity of the experiment and consequently make your results far less robust.

Experimental design is incredibly difficult due to all the subtleties. It's a thing most people frequently fail at (including scientists) and even more frequently fool themselves into believing stronger claims than the experiment can yield.

And before anyone says "but humans", yeah, same complexity applies. It's actually why human experimentation is harder than a lot of other things. There's just far more noise in the system.

But could you get success? Certainly. I mean, you could tell it exactly where the backdoors are. But that's not useful. So now you've got to decide where that line is, and certainly others won't agree.

Arech|7 days ago

That's what I thought of too. Given their task formulation (they basically said "check these binaries with these tools at your disposal" - and that's it!), their results are already super impressive. With proper guidance and professional oversight it's a tremendous force multiplier.

selridge|7 days ago

That's hard. Sometimes you will do that and find it prompts the model into "strategy talk", where it deploys the words and framing you use in your .md files but doesn't actually do the strategy.

Even where it works, it is quite hard to specify human strategic thinking in a way that an AI will follow.

EB66|7 days ago

The fact that Gemini returns the highest rate of false positives aligns with my experience using the Gemini models. I use ChatGPT, Claude and Gemini regularly and Gemini is clearly the most sycophantic of the three. If I ask those three models to evaluate something or estimate odds of success, Gemini always comes back with the rosiest outlook.

I had been searching for a good benchmark that provided some empirical evidence of this sycophancy, but I hadn't found much. Measuring false positives when you ask the model to complete a detection related task may be a good way of doing that.

simianwords|7 days ago

I'm not an expert but about false positives: why not make the agent attempt to use the backdoor and verify that it is actually a backdoor? Maybe give it access to tools and so on.

jakozaur|7 days ago

Many models refuse to do that due to alignment and safety concerns, so a cross-model comparison wouldn't make sense. We do, however, require proof (such as providing a location in the binary) that is hard to game. So the model not only has to say there is a backdoor, but also point out its location.
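
A rough illustration of that kind of check (not necessarily the exact scorer; addresses below are made up):

  # Accept a claim only if the reported address falls inside the known
  # backdoor's address range. Illustrative only; ranges are invented.
  def location_correct(reported: int, start: int, end: int) -> bool:
      return start <= reported < end

  # Model claims a backdoor at 0x140012a40; ground truth spans
  # [0x140012a00, 0x140012b00).
  print(location_correct(0x140012A40, 0x140012A00, 0x140012B00))  # True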

Your approach, however, makes a lot of sense if you are ready to have your own custom or fine-tuned model.

shevy-java|7 days ago

So the best one found about 50%. I think that is not bad, probably better than most humans. But what about the remaining 50%? Why were some found and others not?

> Claude Opus 4.6 found it… and persuaded itself there is nothing to worry about

> Even the best model in our benchmark got fooled by this task.

That is quite strange, because it seems almost as if a human is required to make the AI tools understand this.

Tiberium|7 days ago

I highly doubt some of those results. GPT 5.2/Codex is incredible for cybersecurity and CTFs, and 5.3 Codex (not on API yet) even more so. There is absolutely no way it's below DeepSeek or Haiku. Seems like a harness issue, or they tested those models at none/low reasoning?

jakozaur|7 days ago

As someone who does eval and training data sets for a living: in niche skills, you can find plenty of surprises.

The code is open-source; you can run it yourself using Harbor Framework:

  git clone git@github.com:QuesmaOrg/BinaryAudit.git
  export OPENROUTER_API_KEY=...
  harbor run --path tasks --task-name lighttpd-* --agent terminus-2 \
    --model openrouter/anthropic/claude-opus-4.6 \
    --model openrouter/google/gemini-3-pro-preview \
    --model openrouter/openai/gpt-5.2 \
    --n-attempts 3

Please open a PR if you find something interesting, though our domain experts spend a fair amount of time looking at trajectories.

stared|4 days ago

I reran it for GPT-5.2-Codex, at high and xhigh reasoning effort.

Now it matches my experience, and it is actually good (as good as the best models at localization, with a still-impressive 0% false positive rate): https://quesma.com/benchmarks/binaryaudit/

Will rerun it on GPT-5.3-Codex shortly, as the API is out (though reasoning effort does not work correctly yet, and for "medium" it is very low).

stared|7 days ago

To be honest, it surprised us too. I mean, I used GPT 5.2 Codex in Cursor for decompiling an old game and it worked (way better than Claude Code with Opus 4.5). We tested Opus 4.6, but we're waiting for the public API to test GPT 5.3 Codex.

At the same time, tasks differ, and what works well in a typical, interactive workflow is not necessarily what works best end-to-end.

We used the Terminus 2 agent, as it is the default used by Harbor (https://harborframework.com/) and we wanted to be unbiased. Very likely other agent frameworks would change the results.

hilbert42|6 days ago

What this tells me is that the era of code obfuscation through compilation is likely coming to an end. If anyone is able to reverse-engineer a program it'll have huge ramifications for the industry.

This won't be welcomed by software developers who benefit from obfuscation but consumers could benefit. For example, AI could alter a program to remove or add features to suit users' requirements.

Imagine being able to instruct AI to comb through Windows 11 and remove all telemetry and Copilot code and restore local accounts.

I'd be very pleased with an AI agent that would do that.

Bender|7 days ago

Along this line, can AIs find backdoors spread across multiple pieces of code and/or services? I.e., by themselves they are not backdoors and advanced penetration testers would not suspect anything is afoot, but when used together they provide access.

E.g., intentional weaknesses in systemd + udev + binfmt magic that, when used together, == an authentication and mandatory access control bypass. Each weakness reviewed individually just looks like benign, sub-optimal code.

cluckindan|7 days ago

Start with trying to find the xz vulnerability and other software possibly tying into that.

Is there code that does something completely different than its comments claim?

dgellow|7 days ago

Random thoughts, only vaguely related: what's the impact of AI on CTFs? I would assume it kills part of the fun of such events?

not_a9|7 days ago

Things are pretty brutal and some categories are more affected than others.

A/D (attack/defense) seems to be somewhat less affected.

nisarg2|7 days ago

I wonder how model performance would change if the tooling included the ability to interact with the binary and validate the backdoor. Particularly for models that had a high rate of false positives, would they test their hypothesis?

greazy|7 days ago

Very nitpicky, but because I spend a lot of time plotting data: don't arbitrarily color the bar plots without at least mentioning cutoffs. Why 19% is orange and 20% is green is a mystery.

godelski|7 days ago

It's a pretty common threshold, like 10% is. Be it the 80/20 "Pareto" rule, the value of one finger on one hand, or, if you really want to stretch it, the p-value of 0.05 being 1-in-20 odds (definitely a stretch, and arbitrary anyway). 20 is a very human number and very common. It's just a division by 5 rather than 4 (I'm assuming you wouldn't have questioned a cutoff at 25%).

BruceEel|7 days ago

Very, very cool. Besides the top-performing models, it's interesting (if I'm reading this correctly) that gpt-5.2 did ~2x better than gpt-5.2-codex.. why?

NitpickLawyer|7 days ago

> gpt-5.2 did ~2x better than gpt-5.2-codex.. why?

Optimising a model for a certain task, via fine-tuning (aka post-training), can lead to loss of performance on other tasks. People want codex to "generate code" and "drive agents" and so on. So oAI fine-tuned for that.

manbash|7 days ago

RE agents are really interesting!

Too bad the author didn't really share the agents they were using so we can't really test this ourselves.

fsniper|7 days ago

So these beat me at identifying backdoors too. This is going places at an alarming pace.

ducktastic|7 days ago

It would be interesting to have some tests run against deliberate code obfuscation next

hereme888|7 days ago

> Claude Opus 4.6 found it… and persuaded itself there is nothing to worry about.

Lol.

> Gemini 3 Pro supposedly “discovered” a backdoor.

Yup, sounds typical for Gemini...it tends to lie.

Very good article. Sounds super useful to apply its findings and improve LLMs.

On a similar note... reverse engineering is now accessible to the public. Tons of old software is now easy to RE. Are software companies having issues with this?

openasocket|7 days ago

Ummm, is it a good idea to use AI for malware analysis? I know this is just a proof of concept, but if you have actual malware, it doesn't seem safe to hand that to AI. Given the lengths existing malware already goes to with anti-debugging, making something that prompt-injects, or tricks the AI into executing something, seems even easier.

monegator|7 days ago

the interactive code viewer is neat!

Roark66|7 days ago

And this is one demonstration of why these "1000 CTOs claim no effectiveness improvement after introducing AI in their companies" stories are 100% BS.

They may have not noticed an improvement, but it doesn't mean there isn't any.

localuser13|7 days ago

Is it? Gemini 3-pro-preview and 3-flash-preview, respectively top 2 and top 3, had 44% and 37% true positive rates and whopping 65% and 86% false positive rates. This is worse than a coin toss. Anything more than a 0% false positive rate (3% to be generous) is useless in the real world. This leaves only Grok and GPT, with 18%, 9% and 2% success rates.

In fact, this is what authors said themselves: "However, this approach is not ready for production. Even the best model, Claude Opus 4.6, found relatively obvious backdoors in small/mid-size binaries only 49% of the time. Worse yet, most models had a high false positive rate — flagging clean binaries." So I'm not sure if we're even discussing the same article.

I also don't see a comparison with any other methodology. What is the success rate of ./decompile binary.exe | grep "(exec|system)/bin/sh"? What is the success rate of state-of-the-art alternative approaches?
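
That naive baseline is easy enough to sketch yourself (purely illustrative; a string-only scan over the raw bytes, which a real backdoor obviously need not trip):

  # Flag a binary if its raw bytes contain classic shell-spawn markers.
  # A real comparison would run this over the same benchmark corpus and
  # report true/false positive rates next to the LLM agents' numbers.
  import re
  import sys

  SUSPICIOUS = re.compile(rb"/bin/sh|/bin/bash|execve|reverse.?shell")

  def naive_flag(path: str) -> bool:
      with open(path, "rb") as f:
          return bool(SUSPICIOUS.search(f.read()))

  if __name__ == "__main__":
      for p in sys.argv[1:]:
          print(p, "SUSPICIOUS" if naive_flag(p) else "clean")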

snovv_crash|7 days ago

Even without AI, many (most?) orgs are held back by internal processes and politics, not development speed.

HeWhoLurksLate|7 days ago

it also generally takes a heck of a noisy bang for internal developments to make it to the c-suite

stevemk14ebr|7 days ago

These results are terrible: false positives and false negatives. Useless.

amelius|7 days ago

Yeah, what does the confusion matrix look like?

shablulman|7 days ago

[deleted]

bangaladore|7 days ago

What's the point of posting what is clearly an AI-generated comment?