item 41269791

LLM and Bug Finding: Insights from a $2M Winning Team in the White House's AIxCC

164 points | garlic_chives | 1 year ago | team-atlanta.github.io

32 comments

[+] hqzhao|1 year ago|reply
I'm part of the team, and we used LLM agents extensively for smart bug finding and patching. I'm happy to discuss some insights, and will share all of our approaches after the grand final :)
[+] wslh|1 year ago|reply
Congrats! ELI5: what insights do you have NOW that haven't been extensively published/researched in academic papers or publicly discussed yet?
[+] adragos|1 year ago|reply
Hey, congrats on getting to the finals of AIxCC!

Have you tested your CRS on weekend CTFs? I’m curious how well it’d be able to perform compared to other teams

[+] doctorpangloss|1 year ago|reply
Everyone thinks bug bounties should be higher. How high should they be? Who should pay for them?
[+] simonw|1 year ago|reply
What kind of LLM agents did you use?
[+] garlic_chives|1 year ago|reply
AIxCC is an AI Cyber Challenge launched by DARPA and ARPA-H.

Notably, a zero-day vulnerability in SQLite3 was discovered and patched during the AIxCC semifinals, demonstrating the potential of LLM-based approaches in bug finding.

[+] rfoo|1 year ago|reply
Notably, a previously unknown, trivial NULL pointer dereference in SQLite3's SQL parser was discovered and patched. But yeah, it makes very good marketing material.
[+] hypeatei|1 year ago|reply
Are there any write-ups or CVE pages on that vulnerability? From a quick search, I can't find anything.
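For readers unfamiliar with the bug class discussed above: a NULL pointer dereference happens when code uses the result of an earlier step without checking that the step succeeded. A toy Python analogue (`None` standing in for NULL; the parser, function names, and SQL snippet here are invented for illustration and have nothing to do with the actual SQLite code):

```python
import re

def parse_select(sql):
    """Toy 'parser': returns a match object for a SELECT statement, or None."""
    return re.match(r"SELECT\s+(\w+)", sql)

def buggy_column_of(sql):
    # BUG: assumes parsing always succeeds. On non-SELECT input,
    # parse_select returns None and .group() raises AttributeError,
    # the Python analogue of dereferencing a NULL pointer.
    return parse_select(sql).group(1)

def fixed_column_of(sql):
    # FIX: check the result before using it.
    m = parse_select(sql)
    return m.group(1) if m else None
```

A fuzzer finds this kind of bug simply by feeding the parser inputs the author didn't anticipate; any non-SELECT string crashes the buggy version.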
[+] sim7c00|1 year ago|reply
This is really impressive work. Coverage-guided, and especially directed, fuzzing can be extremely difficult. It's mentioned that fuzzing is not a dumb technique. I think the classical idea is kind of dumb, in the sense of 'dumb fuzzers', but these days there is a ton of intelligence built around and poured into it; I've always thought it's now beyond the classic idea of fuzz testing. I had colleagues who poured their soul into trying to use git commit info etc. to help find potentially bad code paths, and then coverage-guided fuzzing to try to get in there. I really like the little note at the bottom about this. Adding such layers kind of makes it lean towards machine learning nowadays, and I'd think perhaps fuzzing is not the right term anymore. I don't think many people are actually still simply generating random inputs and trying to crash programs like that.

This is really exciting new progress in this field, guys. Well done! Can't wait to see what new tools and techniques will come out of all of this research.

Will you guys be open to implementing something around libafl++, perhaps? I remember we worked with that extensively. As a lot of shops already use it, it might be cool to look at integration into such tools, or do you think this deviates so far that it'll amount to a new kind of tool entirely? Also, the work on datasets might be really valuable to other researchers. There was a mention of wasted work, but labeled datasets of CVE, bug, and patch commits can help a lot of folks if there's new data in there.

This kind of makes me miss having my head in this space :D Cool stuff, and massive congrats on being finalists. Thanks for the extensive writeup!
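The "intelligence built around" fuzzing that the comment above describes, coverage feedback steering otherwise dumb mutation, can be sketched in a few lines. A minimal coverage-guided loop; the toy target, its "FUZ" magic prefix, and the byte-flip mutator are all invented for illustration (real fuzzers instrument compiled code rather than using a hand-maintained coverage set):

```python
import random

COVERAGE = set()  # branch IDs hit during the current run

def target(data: bytes):
    """Toy target: deeper branches are only reachable via a specific prefix."""
    COVERAGE.add("start")
    if len(data) > 0 and data[0] == ord("F"):
        COVERAGE.add("F")
        if len(data) > 1 and data[1] == ord("U"):
            COVERAGE.add("FU")
            if len(data) > 2 and data[2] == ord("Z"):
                COVERAGE.add("FUZ")
                raise RuntimeError("crash: magic prefix reached")

def mutate(data: bytes) -> bytes:
    """Replace one random byte (a 'dumb' mutation operator)."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def fuzz(seed: bytes, iterations: int = 200_000):
    corpus = [seed]  # inputs that discovered new coverage
    seen = set()     # all branch IDs ever hit
    for _ in range(iterations):
        data = mutate(random.choice(corpus))
        COVERAGE.clear()
        try:
            target(data)
        except RuntimeError:
            return data  # crashing input found
        if not COVERAGE <= seen:  # new coverage: keep this input
            seen |= COVERAGE
            corpus.append(data)
    return None  # budget exhausted without a crash
```

Because each correct prefix byte is rewarded (kept in the corpus) the moment it is discovered, this guided search reaches "FUZ" in thousands of iterations where a blind random-bytes generator would need on the order of 256^3 tries.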

[+] rockskon|1 year ago|reply
The AIxCC booth felt like it was meant for a trade show rather than a place where someone could learn something.
[+] hqzhao|1 year ago|reply
I heard that the AIxCC booth prepared the same challenges for the audience to solve manually, but I didn’t check the details.

I believe there will be even more cool stuff in next year’s grand final. If you want to get a sense of what to expect, check out the DARPA CGC from 2016. :)