I checked the stack overflow that was marked High, and Fil-C prevents that one.
One of the out-of-bounds writes is also definitely prevented.
It's not clear if Fil-C protects you against all of the others (Fil-C won't prevent denial of service, and that's what some of these are; Fil-C also won't help you if you accidentally didn't encrypt something, which is what another one of these bugs is about).
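For anyone unfamiliar with what "preventing" an out-of-bounds write means here: memory-safe runtimes like Fil-C turn silent memory corruption into a deterministic trap. A loose analogy in Python (this is illustrative only, not Fil-C's mechanism — Python's bounds checks play roughly the role that Fil-C's capability checks play for C):

```python
def copy_into(dst, src):
    """Copy src into dst byte by byte; a bounds-checked runtime
    refuses to write past the end of the destination buffer."""
    for i, b in enumerate(src):
        dst[i] = b  # raises IndexError instead of corrupting adjacent memory

buf = bytearray(4)
copy_into(buf, b"hi")            # fits: buf becomes b'hi\x00\x00'
try:
    copy_into(bytearray(4), b"too long")  # 8 bytes into a 4-byte buffer
except IndexError:
    print("out-of-bounds write trapped")
```

In plain C the second call would silently scribble over whatever sits after the buffer; under Fil-C (or this Python analogy) it halts instead.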
The one about forgetting to encrypt some bytes is marked Low Severity because it's an API that they say you're unlikely to use. Seems kinda believable but also ....... terrifying? What if someone is calling the AESNI codepath directly for reasons?
Here's the data about that one:
"Issue summary: When using the low-level OCB API directly with AES-NI or other hardware-accelerated code paths, inputs whose length is not a multiple of 16 bytes can leave the final partial block unencrypted and unauthenticated.
Impact summary: The trailing 1-15 bytes of a message may be exposed in cleartext on encryption and are not covered by the authentication tag, allowing an attacker to read or tamper with those bytes without detection."
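To make the failure mode concrete, here is a deliberately buggy toy sketch. This is not OpenSSL's actual code — the keystream is a made-up stand-in — but it shows the shape of the bug: a fast path that only processes full 16-byte blocks, and a finalize step that forgets to encrypt the 1-15 byte tail.

```python
BLOCK = 16

def keystream_byte(i):
    # Toy keystream stand-in; a real cipher derives this from key and nonce.
    return (37 * i + 11) % 256

def encrypt_buggy(msg):
    out = bytearray()
    full = len(msg) - (len(msg) % BLOCK)
    for i in range(full):          # "hardware-accelerated" full-block path
        out.append(msg[i] ^ keystream_byte(i))
    out += msg[full:]              # BUG: trailing partial block copied as-is
    return bytes(out)

msg = b"A" * 20                    # one full 16-byte block plus a 4-byte tail
ct = encrypt_buggy(msg)
assert ct[:16] != msg[:16]         # full blocks were encrypted
assert ct[16:] == msg[16:]         # the last 4 bytes leak in cleartext
```

Because the tail is never fed to the cipher (or the authentication tag), the last bytes ship in plaintext and can be tampered with undetected — exactly the advisory's "trailing 1-15 bytes" scenario.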
I suspect this year we are going to see a _lot_ more of this.
While it's good these bugs are being found and closed, the problem is twofold:
1) It takes time to get the patches through distribution.
2) The vast majority of projects are not well equipped to handle complex security bugs in a "reasonable" time frame.
2 is a killer. There's so much abandonware out there, either as full apps/servers or libraries. These can't ever really be patched. Previously these weren't really worth spending effort on - might have a few thousand targets of questionable value.
Now you can spin up potentially thousands of exploits against thousands of long tail services. In aggregate this is millions of targets.
And even if this case didn't exist it's going to be difficult to patch systems quickly enough. Imagine an adversary that can drip feed zero days against targets.
Not really sure how this can be solved. I guess you'd hope that the good guys can do some sort of mega patch against software quicker than bad actors.
But really as the npm debacle showed the industry is not in a good place when it comes to timely secure software delivery even without millions of potential new zero days flying around.
No, the biggest problem at the root of all this is complexity. OpenSSL is a garbled mess. AI or not, such software should not be the security backbone of the internet.
People writing and maintaining software need to optimize for simplicity, readability, and maintainability. Whether they use an LLM to achieve that is secondary. The humans in the loop must understand what's going on.
> 2 is a killer. There's so much abandonware out there, either as full apps/servers or libraries. These can't ever really be patched. Previously these weren't really worth spending effort on - might have a few thousand targets of questionable value.
It's worse than that. Before, the operator of a system could upgrade the distro's openssl version, restart the service, and be pretty much done. Even with a 3rd-party vendor app, you could at the very least ship security updates for the shared libs.
Nowadays, when everything runs in containers, you have to make sure every single vendor you take containers from has shipped that update.
> Finding a genuine security flaw in OpenSSL is extraordinarily difficult.
history suggests otherwise
> The fact that 12 previously unknown vulnerabilities could still be found there, including issues dating back to 1998, suggests that manual review faces significant limits, even in mature, heavily audited codebases.
no, the code is simply beyond horrible to read, not to mention diabolically bad
if you've never tried it, have a go, but bring plenty of eyebleach
If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.
It really is just a collection of several dozen research grade implementations for algorithms + a small handful of load bearing algorithms for the entire internet. Surprisingly, OpenSSL isn't the only critical piece of internet architecture like this.
We don't know how to secure C codebases by manual review. This has been well known to security engineering people for decades, and has been the wider industry and academic consensus for a long time. It's on the level of "is man-made climate change real".
(We don't know how to secure other codebases either, but C is harder since its memory safety story is like a chainsaw juggling act so code has classes of vulnerabilities that other languages don't and this eats a lot of the attention).
I can read C/C++ code about as well as I can read German. Bits and pieces make sense but I definitely don’t get the subtleties.
What’s eye bleachy about this beyond regular C/C++?
For context I’m fluent in C#/javascript/ruby and generally understand structs and pointers although not confident in writing performant code with them.
"We submitted detailed technical reports through their coordinated security reporting process, including complete reproduction steps, root cause analysis, and concrete patch proposals. In each case, our proposed fixes either informed or were directly adopted by the OpenSSL team."
I don't know why you're still using OpenSSL but if you're able to switch I note that BoringSSL was not affected by any of the January 2026 OpenSSL advisories, and was also not affected by any of the advisories from 2025, and was affected by only one of the 2024 advisories. I also note that I don't see any hasty commit activity to s2n-tls that looks like a response to these advisories.
I like to recommend that project because it has a very transparent approach to vulnerabilities and is, in my opinion, written a lot more sanely than OpenSSL, which largely avoids standard C facilities because, like a kernel, it implements everything from scratch.
But yeah, anyways, WolfSSL comes from the embedded area in case that's your thing.
The sad reality is that if your code is available for free and works most of the time, nothing else matters. I'm not sure I would call it "product success" given that OpenSSL's income is enough to cover, like, one dude in a LCOL country some of the time.
OpenSSL is a very odd codebase. It has grown by accretion, under many stewards, with several flavours of coding belief, out of SSLeay, which Eric Young wrote over two decades ago. It had chip-specific speedups from the days of the Intel 486.
I was part of a body which funded work to include some features in the code. The way you take something like X509 and incorporate a new ASN.1 structure, to be validated against conformance requirements (so not just signing blindly over the bitstream, but understanding the ASN.1 and validating that it has certain properties about what it says, like not overlapping assertions of numeric ranges encoded in it), is to invoke callouts from deep down, perform tasks, and then return state. You basically have to do about a 5-layer-deep callout and return. It's a massive wedding cake of dependency on itself; it personifies the xkcd diagram of "...depends on <small thing>" risks.
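The deep callout-and-return pattern described above can be sketched abstractly (all names here are invented for illustration — this is not an OpenSSL API). Each layer does a little work, calls down, and folds the returned state back in on the way up:

```python
def layer(depth, state):
    # Each layer validates one slice of the structure, then calls down.
    state = state + [f"enter layer {depth}"]
    if depth < 5:
        state = layer(depth + 1, state)   # the "callout from deep down"
    return state + [f"return through layer {depth}"]

trace = layer(1, [])
# A 5-layer-deep callout produces 5 enters followed by 5 returns; state
# threads through every level, which is why reasoning about where it can
# go wrong is so hard.
assert len(trace) == 10
```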
I'm not surprised people continue to find flaws. I would like to know whether this approach also found flaws in e.g. libsodium or other more modern cryptography, or in the OpenBSD-maintained LibreSSL code, or Peter Gutmann's code.
The title change from "AISLE" to "AI" is misleading. As the article states,
> This doesn't mean that AI can replace human expertise. The OpenSSL maintainers' deep knowledge of the codebase was essential for validating findings and developing robust fixes. But it does change the SLA of security. When autonomous discovery is paired with responsible disclosure, it collapses the time-to-remediation for the entire ecosystem.
They don't appear to go into detail about anything except how great it is that they found the bugs, what those bugs were, and how rare it is for other people to find bugs.
I think that it would be helpful from a research point of view to know what sort of noise their AI tool is generating, but, because they appear to be trying to sell the service, they don't want you to know how many dev months you will lose chasing issues that amount to nothing.
Even if it does have false positives, I expect it would make a nicer starting point for finding and verifying bugs/vulnerabilities, compared to wading through the entire codebase until you find something. Even if it is a false positive, it would probably be due to sketchy looking code (hopefully, unless it hallucinated completely new code) that you can take a look at, and maybe spot something else that the AI didn't catch.
Besides the HN submission, XBOW and Hacktron AI have found plenty of vulnerabilities in code.
I don't want to discredit the authors, but I want to offer a couple of hypothetical points in these paranoid times.
From a marketing angle, for a startup whose product is an AI security tool, buying zero-days on the black market and claiming the AI tool found them might be good ROI. After all, this is making waves.
Or, could it be possible the training set contains zero-day vulnerabilities known to three-letter agencies and other threat actors but not to the public?
These two are not mutually exclusive either. You could buy exploits and put them in the training set.
Does anyone have any recommendations on best practice security methods? As others have said, it sounds like there may be an order of magnitude more vulnerabilities found / exploited, and I'm wondering if security such as 2FA and Password Managers will be enough? Should people be getting on board with other protections such as security keys?
Same as for people. You establish what the threat model is and then have multiple approaches. For example going through all interesting operations, tracking down their inputs and data flow, then looking for edge cases along the way. If you have enough time / tokens, this becomes more of a spreadsheet/checklist exercise. The more experience you have, the better you can prioritise that list towards paths that are more likely to be disrupted.
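The "spreadsheet/checklist exercise" described above might be sketched like this (the operations, inputs, and priorities are invented examples, not a real audit):

```python
# Toy review checklist: enumerate interesting operations, note where their
# inputs come from, and prioritise the ones attackers can reach.
checklist = [
    {"operation": "parse_certificate", "input": "network",    "reviewed": False},
    {"operation": "decrypt_record",    "input": "network",    "reviewed": False},
    {"operation": "load_config",       "input": "local file", "reviewed": True},
]

# Untrusted (network-reachable) and unreviewed entries go to the top.
queue = sorted(checklist, key=lambda row: (row["input"] != "network", row["reviewed"]))
todo = [row["operation"] for row in queue if not row["reviewed"]]
assert todo == ["parse_certificate", "decrypt_record"]
```

With enough time (or tokens) you just work the queue top to bottom, tracking data flow into each operation and probing the edge cases along each path.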
Like any powerful tool, used responsibly in the right hands it could lead to great good; in the wrong hands or used irresponsibly, it could be extremely dangerous.
The fun thing to me here is that a ton of really creative thinkers are going to have access to tools (LLM agents) that allow them to test their thinking quickly. I dearly hope that this leads to a prolonged phase of pain and loss.
We made good choices when we decided the information on the internet should be delivered by simple, open protocols.
We made bad choices when we decided that the information on the internet didn't need to be verified, or verifiable.
Then we slipped on our good choices, because our bad choices let robber barons claim the verified or verifiable case.
And then we were left an explosive entropy shit-pile.
But now the new tools the new overlords are paying us to use will help us break free from their shackles, bwahahahahahahahahahahahah!!!!
Link seems to be down...
But also, considering curl recently shut down its bug bounty program due to AI spam, this doesn't really inspire much confidence.
arcfour|1 month ago
Although I agree in principle it is quite scary!
MBCook|1 month ago
Let’s see them to do this on projects with a better historical track record.
pjmlp|1 month ago
Even if not all logic errors can be prevented, some of them keep happening by using the wrong tools.
charcircuit|1 month ago
AI can automatically handle security reports.
cryptonector|1 month ago
The methodology for developing and maintaining codebases like OpenSSL has changed!
> no, the code is simply beyond horrible to read, not to mention diabolically bad
OpenSSL? Parts of it definitely are, yes. It's better since they re-styled it. The old SSLeay code was truly truly awful.
rzerowan|1 month ago
Would be interesting to see if any of those found exist there.
snvzz|1 month ago
We are still suffering from that mistake, and LibreSSL is well-maintained and easier to migrate to than it ever was.
What the hell are we waiting for?
Is nobody at Debian, Fedora or Ubuntu able to step forward and set the direction?
nextaccountic|1 month ago
Why not start from a clean slate? Companies like Google could afford it
dnw|1 month ago
This sounds like a great approach. Kudos!
jeffbee|1 month ago
Better software is out there.
cookiengineer|1 month ago
[1] https://www.wolfssl.com/
[2] https://github.com/wolfssl/wolfssl
mvkel|1 month ago
More evidence that "coding elegance" is irrelevant to a product's success, which bodes well for AI generated code.
not_a_bot_4sho|1 month ago
The unexpected part here being that AI brings specks of elegance to a terrible, inelegant codebase.
OpenSSL is a large target.
cryptonector|1 month ago
It's also leading people to submit hallucinations as security vulns in open source. I've had to deal with some of them.
tqk_x|1 month ago
So again this is not reproducible, and everything is hidden behind a SaaS platform. That is apparently the future people want.
tyre|1 month ago
It doesn't look like they had 1 AI run for 20 minutes and then 30 humans sift through for weeks.
ChrisArchitect|1 month ago
AI discovers 12 of 12 OpenSSL zero-days (while curl cancelled its bug bounty)
https://www.lesswrong.com/posts/7aJwgbMEiKq5egQbd/ai-found-1...
yes_man|1 month ago
I would not be surprised if it is legit though.
mnicky|1 month ago
Also, I don't think the three letter agencies would share one of the most prized assets they have...
crm9125|1 month ago
Without Humans, AI does nothing. Currently, at least.
ChrisArchitect|1 month ago
OpenSSL: Stack buffer overflow in CMS AuthEnvelopedData parsing
https://news.ycombinator.com/item?id=46782662
move-on-by|1 month ago
As for all the slop the Curl team has been putting up with, I suppose a fool with a tool is still a fool.
apexalpha|1 month ago
https://www.linkedin.com/posts/danielstenberg_vulnerabilitie...