
We fine-tuned an LLM to triage and fix insecure code

75 points | asadeddin | 1 year ago | corgea.com

63 comments


tptacek|1 year ago

I've been playing with o1 on known kernel LPEs (a drum I've been beating is how good OpenAI's models are with Linux kernel stuff, which I assume is because there is such a wealth of context to pull from places like LKML) and it's been very hit-or-miss. In fact, it's actually pretty SAST†-like in its results: some handwavy general concerns, but needs extra prompting for the real stuff.

The training datasets here also seem pretty small, by comparison? "Hundreds of closed source projects we own"?

It'd be interesting to see if it works well. This is an easy product to prove: just generate a bunch of CVEs from open source code.

† SAST is enterprise security dork code for "security linter"

asadeddin|1 year ago

Unfortunately, I realized the sentence reads weirdly. It's meant to say we use hundreds of repositories: closed-source projects we own + open-source projects that are vulnerable by design + other open-source projects. I've updated the language in the post.

It's very true. SAST really is enterprise security dork code for "security linter"! I might start using that in some of our developer-facing content.

We recently launched a feature that combines LLMs with static code analysis to detect more sophisticated business- and code-logic findings, i.e. more of the real stuff. We wanted to follow industry naming conventions for familiarity while still differentiating, so we called it BLAST (Business Logic Application Security Testing).

asadeddin|1 year ago

I'm Ahmad, the founder of Corgea. We're building an AI AppSec engineer to help developers automatically triage and fix insecure code. Our false-positive detection cuts SAST findings by ~30%, and we accelerate remediation by ~80%. To do this for large enterprises, we had to fine-tune a model that we can deploy securely and privately.

We're very proud of the work we recently did, and wanted to share it with the greater HN community. We'd love to hear your feedback and thoughts. Let me know if I can clarify anything in particular.

zwaps|1 year ago

It sounds like you are training multiple low rank adapters?
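For readers unfamiliar with the term: low-rank adaptation (LoRA) freezes the pretrained weight matrix W and learns only a small rank-r update ΔW = B·A. A minimal numeric sketch of the idea (illustrative only, not Corgea's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4              # full dims vs. adapter rank

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in))          # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

x = rng.normal(size=d_in)

# Forward pass: base output plus the low-rank correction B @ (A @ x).
y = W @ x + B @ (A @ x)

# With B initialised to zero, the adapter starts as an exact no-op.
assert np.allclose(y, W @ x)

# Only ~2*r*d parameters are trained instead of the full d*d.
print(A.size + B.size, "adapter params vs", W.size, "full params")
```

Because each adapter is tiny relative to the base model, it's cheap to train and store several of them (e.g. one per vulnerability class) against the same frozen base weights.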

Mountain_Skies|1 year ago

Finding SQL injection is pretty trivial for SAST tools. The difficulty is what happens next. After whatever tool finds several thousand SQLI vulns in a ColdFusion application from 2001 that hasn't been touched in over a decade, someone must be identified to take responsibility for changing the code, testing it, and deploying it. Even if the tool can change the code, no one will want to take responsibility for changes to an application that has been quietly running correctly since before most of their department joined the company, built on an ancient technology that no one has experience deploying to production. This is where so many vulns live.

Shift-left and modern development patterns can catch a very large share of known vulns, so in newer applications it becomes mostly about fixing newly discovered vulns within an active development cycle. It's the older code that's the real scary monster, and identifying the vulns is the least scary part of the process of getting them remediated and deployed to production.

Anything that reduces false positives is good, especially if it does so without also making a significant reduction in identified true positives, but none of that changes the fact that it is the low hanging fruit of the system.

asadeddin|1 year ago

Totally agree. We have a term for it: "dev confidence". Devs really don't want to touch something that's been working for a long time, especially in a codebase they're not familiar with. The more removed the dev is from the code they're working on, and the longer it's been running, the lower their confidence. We built in mechanisms that run a number of checks on our fixes to make sure, to the best of our ability, that nothing breaks.

On false positives, we introduced false positive detection using AI & static analysis because of the exact issue you're highlighting.

bigiain|1 year ago

What an awesome way of finding companies who suspect their code is insecure, and then having them give you their source code. And _charging_ them for it, presumably to make it an easier sell to CXOs: "Nah, it's not those free software hippy communists, they're gonna make you pay through the nose for this, like a _proper_ compliance checkbox ticking outsourced vendor!"

I wonder if this is an NSA front? Or Palantir maybe? Or NSO?

Mountain_Skies|1 year ago

The best companies to hit would be those foolish enough to not suspect their code is insecure because all software development produces vulns. Off prem scanning is a big issue in the AppSec space and vendors handle it in various ways, mostly through promises and documented processes, neither of which mean much if the vendor is a front for an intelligence agency or had otherwise been captured.

There are some free tools out there but most do lag behind the industry as a whole by quite a bit. There's also lots of abandoned free tools out there cluttering up the space. Plenty started with good intentions that now give a false sense of security. There's also lots of snake oil in the paid space. Doing one's homework really helps here and you'd be surprised how many tools fail miserably during a simple proof of concept test, which is probably why more and more vendors try to avoid them.

GoblinSlayer|1 year ago

Whose code do you think is secure?

WalterBright|1 year ago

> an SQL injection vulnerability

I simply do not understand why the SQL API even allows injection vulnerability. Adam Ruppe and Steven Schweighoffer have done excellent work in writing a shell API over it (in D) that makes such injections far more difficult to inadvertently write.

On airplanes, when a bad user interface leads to an accident, the user interface gets fixed. There's no reason to put up with this in programming languages, either.

_jhqp|1 year ago

> why the SQL API even allows injection vulnerability

How would one implement this?

"SQL APIs" use prepared statements: you have a fixed SQL string, and dynamic values are bound to placeholders like $1, $2, etc.

BUT if the developer builds that SQL string dynamically from a variable, then you have SQL injection again.
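The distinction can be sketched with Python's sqlite3 module (the users table here is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# Unsafe: the input is concatenated into the SQL string itself, so
# the quote characters change the structure of the query.
unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # returns every row

# Safe: the ? placeholder keeps the input as data, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # returns []
```

The API supports the safe form, but nothing stops a developer from writing the unsafe one, which is the point being made above.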

asadeddin|1 year ago

I agree. It would be nice if most SQL APIs were secure by default to prevent SQLI. It's really something the DB connectors in programming languages should handle more gracefully, the way most ORMs today handle it pretty well.

I believe it's largely due to SQL being designed to allow queries to be concatenated as strings, plus poor logic design when writing such queries.

tptacek|1 year ago

In virtually every dev environment, the overwhelming majority of queries are most straightforwardly written in a way that doesn't admit SQLI. It's not really a programming language thing.

notepad0x90|1 year ago

The vulnerability class is hardly unique to SQL. Any program that constructs content to be processed by another program or subroutine, where an attacker can control that content, can exhibit such a vulnerability. Good examples are format strings in C, or CGI scripts that call each other or run OS commands.
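A minimal sketch of the same pattern outside SQL, using OS command injection in Python (the attacker_input value is an assumed example):

```python
import subprocess

attacker_input = "hello; echo INJECTED"

# Unsafe: shell=True hands the whole string to /bin/sh, so the ';'
# terminates the echo command and runs a second, attacker-chosen one.
unsafe = subprocess.run(f"echo {attacker_input}",
                        shell=True, capture_output=True, text=True)
print(unsafe.stdout)   # "hello\nINJECTED\n" -- two commands ran

# Safe: an argument list bypasses the shell entirely; the ';' is
# just an ordinary character inside a single argument (data, not code).
safe = subprocess.run(["echo", attacker_input],
                      capture_output=True, text=True)
print(safe.stdout)     # "hello; echo INJECTED\n" -- one command ran
```

Exactly as with SQL, the mitigation is the same in shape: pass untrusted input as structured data, never by splicing it into a string that another interpreter will parse.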

otabdeveloper4|1 year ago

> the SQL API

No such thing.

sachahjkl|1 year ago

let me introduce you to the much better and reliable world of: static analysis

hashtag-til|1 year ago

I feel we're going to have a hard time over the next months with a stream of these "magic tools" that solve already-solved problems and try to milk money out of managers who have no clue.

asadeddin|1 year ago

I would redefine it a bit.

Reliable = deterministic

Accurate? Not at all. Studies show that ~30% of findings are false positives. We've also seen that with the companies we work with, because we built a false positive detection feature in Corgea. Another ~60% of real issues are missed entirely (false negatives). https://personal.utdallas.edu/~lxz144130/publications/icst20...

We combine static analysis + LLMs to do better detection, triaging and auto-fixing because static analysis alone is broken in many ways.

xrd|1 year ago

I was ready to sign up after I read the article. But when I click on the button at the bottom ("Ready to fix with a click?"), nothing happens. After opening dev tools, I can see it registers the click with a LinkedIn ad-tracker network event, but nothing happens. Maybe Firefox is blocking it?

jgalt212|1 year ago

maybe. I've had more and more issues with Firefox under Linux lately.

vouaobrasil|1 year ago

These small incremental AI tools seem, in isolation, to be helpful things for human coders. But over a period of decades, these iterations will eventually become mostly autonomous, writing code by themselves and without much human intervention compared to now. And that could be a very dangerous thing for humanity, but most people working on this stuff don't care, because by the time that happens they will be retired with a nice piece of private property that will isolate them from the suffering of those who have not yet obtained theirs.

xyproto|1 year ago

If the danger is a high degree of inequality among humans on Earth, we are already there.

EGreg|1 year ago

Exactly. And it won’t isolate them btw. The AI will affect them too.

nodeshiftcloud|1 year ago

We find the idea of fine-tuning an LLM to triage and fix insecure code intriguing. However, we have concerns about the limitations posed by the size of the training dataset. As @tptacek mentioned, relying on "hundreds of closed source projects" might not provide the diversity needed to effectively identify a wide range of vulnerabilities, especially in complex systems like the Linux kernel. Incorporating open-source projects could enrich the model's understanding and improve its accuracy. Additionally, benchmarking the model by attempting to generate CVEs from open-source code seems like a practical way to assess its real-world effectiveness. Has anyone experimented with expanding the training data or testing the model against known vulnerabilities in open-source repositories?

asadeddin|1 year ago

That's what we've done. Unfortunately, I realized the sentence reads weirdly. It's meant to say we use hundreds of repositories: closed-source projects we own + open-source projects that are vulnerable by design + other open-source projects. I've updated the language in the post.

Doing so, we've been able to capture a very wide range of vulnerabilities, mainly web application vulnerabilities, across projects ranging from small to very large.