CVE Stuffing

290 points | CapacitorSet | 5 years ago | jerrygamblin.com

102 comments

MrStonedOne|5 years ago

Way back when, I saw a report on Hacker News about secret exposure from websites that deployed directly via a git repo as a webroot and didn't block access to .git/.

I added a cheeky message to my site's .git/ folder if you attempted to view it.

About 2 or 3 months later I started getting "security reports" to the catch all, about an exposed git folder that was leaking my website's secrets.

Apparently, because my site didn't return 404, their script assumed I was exposed and they oh so helpfully reported it to me.

Got like 4 or 5 before I decided to make it 404 so they would stop, mainly because I didn't want to bring false positive fatigue onto "security exploit" subject line emails.

I have a feeling CNAs are bringing this kind of low effort zero regard for false positive fatigue bullshit to CVEs. Might as well just rip that bandaid off now and stop trusting anything besides the debian security mailing list.

Shank|5 years ago

This is quite common. If you run a security@ mailbox at a company, you're bound to receive hundreds of bug bounty/responsible disclosure requests because of known software quirks or other design choices. They'll cite precisely one CVE or HackerOne/BugCrowd report, and then proceed to demand a huge payment for a critical security flaw.

I've seen reports that easily fail the airtight hatchway [0] tests in a variety of ways. Long cookie expiration? Report. Any cookie doesn't have `Secure`, including something like `accepted_cookie_permissions`? Report. Public access to an Amazon S3 bucket used to serve downloads for an app? Report. WordPress installed? You'll get about 5 reports for things like having the "pingback" feature enabled, having an API on the Internet, and more.

The issue is that CVEs and prior-art bug bounty payments seem "authoritative" and once they exist, they're used as reference material for submitting reports like this. It teaches new security researchers that the wrong things are vulnerabilities, which is just raising a generation of researchers that look for the entirely wrong things.

[0]: https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...

csnover|5 years ago

Be thankful you only receive automated security reports about an open .git directory. There is some guy/company who goes around running a web spider connected to some shitty antivirus which automatically submits false abuse reports to site ISPs claiming that their customers are hosting viruses. This happened to me twice; I think after the second time my ISP started rejecting these reports outright since I haven’t seen any new ones for a few years now, even though they’re clearly still at it (or, maybe, finally stopped last year after getting DDoSed?)[0].

Automated security scanning by people who don’t know what they are doing has become an enormous hassle in so many ways and really is damaging the ability to find and handle true threats.

[0] https://twitter.com/badlogicgames/status/1267850389942042625

cperciva|5 years ago

Speaking of "security exploits" consisting of reading publicly available information: Tarsnap has public mailing lists with public mailing list archives, and at least once a month I get an email warning me that my "internal emails" are accessible.

cipherboy|5 years ago

> I have a feeling CNAs are bringing this kind of low effort zero regard for false positive fatigue bullshit to CVEs. Might as well just rip that bandaid off now and stop trusting anything besides the debian security mailing list.

Red Hat (my employer), Canonical, and SUSE are also CNAs. I can only speak to ours, but I think our prodsec team does a great job with the resources they've been given. Nobody is perfect, but if you take the time to explain the problem (invalid CVE, wrong severity, bad product assignment, ...) they consistently take the time to understand the issue and will work with whatever other CNA or reporter to fix it. Generally we have a public tracker for unembargoed CVEs, so if it affects us and isn't legitimate or scoped correctly, you might get somewhere by posting there (or the equivalent on Ubuntu/SUSE's tracker).

Perhaps it's just the nature of the open source community that Linux distros are part of, though, that lets them apply the same care to CVEs.

Doesn't help with personal reports though. :-)

Curious, did you get CVE assignments against your personal site? 0.o

thaumasiotes|5 years ago

> I have a feeling CNAs are bringing this kind of low effort zero regard for false positive fatigue bullshit to CVEs.

Yes, being the discoverer of a CVE is a major resume item. Pen testers who have a CVE to their name can charge more. Companies can charge more for sending them.

pixl97|5 years ago

Is there a way to return a custom 404 error handler for .git and a different one for a regular 404 in Apache? Never tried that before.
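There is, at least in principle. A sketch (untested, assuming Apache 2.4 with mod_alias; the /errors/ paths are placeholders): ErrorDocument can be scoped inside a <LocationMatch> block, so the .git match can get its own 404 page while everything else falls through to the site-wide one.

```
# Anything under a .git/ path returns 404 with its own error page
<LocationMatch "/\.git(/|$)">
    RedirectMatch 404 ".*"
    ErrorDocument 404 /errors/git-not-here.html
</LocationMatch>

# Regular site-wide 404 for everything else
ErrorDocument 404 /errors/not-found.html
```

Using RedirectMatch with status 404 (rather than Require all denied) avoids leaking the directory's existence via a 403.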

jeltz|5 years ago

How do they contact you? I have never got any report.

seanwilson|5 years ago

> Apparently, because my site didn't return 404, their script assumed I was exposed and they oh so helpfully reported it to me.

There's no good reason that folder should exist except for a joke, so how is this not a helpful message in the vast majority of cases? All lint rules have exceptions, doesn't make them not useful.

bregma|5 years ago

I'm a command-line development tools maintainer for an OS. I am not unfamiliar with high-level CVEs in my inbox with the likes of "gdb crashes on a handcrafted core file causing a DoS". I am unfamiliar with a real world in which a simple old-fashioned segfault in a crash analysis tool is truly a denial of service security vulnerability, but our security department assures us we need to drop all revenue work and rush out a fix because our customers may already be aware that our product is shipping with a known CVE.

There are occasions in which I recognize a CVE as a vulnerability to a legitimate possible threat to an asset. By and large, however, they seem to be marketing material for either organizations offering "protection" or academics seeking publication.

I think like anything else of value, inflation will eat away at the CVE system until something newer and once again effective will come along.

raverbashing|5 years ago

Ah yes, this also fits with the famous "no insecure algorithms" rule, in which an auditor will check a box if you use MD5, even for a feature totally unrelated to security.

easterncalculus|5 years ago

Lots of CVEs are illegitimate. You have people creating whole "vulnerabilities" that are just long known features of various technologies. The worst one I'm remembering is the "discovery" of "Zip Slip" and "ZipperDown", which were both just gotchas in the zip format that have been known about for decades now. Both got trendy websites just like Spectre and Meltdown, and loads of headlines. ZipperDown.org is now an online slots website.

- https://snyk.io/research/zip-slip-vulnerability

- http://phrack.org/issues/34/5.html#article

- https://www.youtube.com/watch?v=Ry_yb5Oipq0

grnd|5 years ago

Hi there. Danny here, co-founder at Snyk and the guy behind the Zip Slip research.

First, at no point did we claim that this is a new type of vulnerability. On the contrary: in every talk I gave (most are on YouTube), I started by saying that it's a 30-year-old vuln originally published in Phrack, showing the actual Phrack issue.

Secondly, the real problem here is that 30 years later, in some languages like Java, more than 90% of archive extraction implementations are vulnerable to this issue. Like, really vulnerable, RCE kind of vulnerable. So no, this is not just a "zip format gotcha"; this is a real issue in real apps. This is the kind of vulnerability that every security person knows of, but not that many developers do. When they write extraction code, they most often do it without considering the path traversal issues. Some languages solved it by providing a simple API for you to extract an archive, like Python's zipfile.extractall(). This is great! But others, like Java, stayed behind and made the developers either write it themselves (wrongly) or copy and paste it from Stack Overflow (most answers are vulnerable). Fast forward 30 years: still too many vulnerable apps (we identified several hundred).

Since this is an issue of awareness, we thought it would be good to have a better name. Just like "zip bomb" is well known, Zip Slip should be too. Neither is a zip-only issue (other archivers and compressors are affected), but both names are simple to remember.

Anyway, looking back it's very easy to see the impact of such research. I'm not talking about Snyk's marketing and such; I'm talking about hundreds of open source projects fixing the issue (maintainers confirming it), CVEs assigned, and many developers learning about it (blog posts, talks, etc). Peace
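For readers who haven't seen the bug class: a minimal Python sketch of the check that prevents Zip Slip style path traversal (the function name and error type are illustrative, not from Snyk's advisory).

```python
import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a zip archive, rejecting entries that would escape dest_dir.

    An entry named "../../etc/cron.d/evil" resolves outside the destination;
    naive loops that just join and write each entry name are vulnerable.
    """
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            # Resolve the entry against the destination and make sure the
            # result is still inside the destination directory.
            target = os.path.realpath(os.path.join(dest_dir, name))
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"blocked path traversal entry: {name!r}")
        zf.extractall(dest_dir)
```

The same realpath-then-prefix check applies to tar archives and any other format whose entries carry attacker-controlled paths.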

whydoyoucare|5 years ago

I believe the ZipSlip was/is a marketing effort for snyk.

dx87|5 years ago

I think this goes hand-in-hand with people naming security vulnerabilities and trying to make it a big spectacle. Sometimes it is a legit serious vulnerability, like Shellshock or Heartbleed, but a lot are just novices trying to get their 15 minutes of fame. I remember a few years back there was a "vulnerability" named GRINCH, where the person who discovered it claimed it was a root privilege escalation that worked on all versions of Red Hat and CentOS. They made a website and everything for it, and tried to hype it up before disclosing what it was. Turns out the "vulnerability" was members of the wheel group being able to use sudo to run commands as root.

tptacek|5 years ago

It's hard for me to think of a serious downside for named vulnerabilities. People who try to name sev:lo bugs get made fun of; it backfires.

jart|5 years ago

I remember when people in the security community started filing CVEs against the TensorFlow project, claiming that code execution was possible with a handcrafted TensorFlow graph, and the team would have to try and explain, "TensorFlow GraphDefs are code".

belval|5 years ago

The whole situation around CVE in Tensorflow is very painful, you get GitHub security notifications for any public repository using TF because of a "known CVE", even though it's basically just a train.py script that is not deployed anywhere.

tptacek|5 years ago

I understand the frustration, and I'm pretty sure the root cause is straightforward ("number of CVEs generated" is a figure of merit in several places in the security field, especially resumes, even though it is a stupid metric).

But the problem, I think, contains its own solution. The purpose of CVEs is to ensure that we're talking about the same vulnerability when we discuss a vulnerability; to canonicalize well-known vulnerabilities. It's not to create a reliable feed of all vulnerabilities, and certainly not as an awards system for soi-disant vulnerability researchers.

If we stopped asking so much from CVEs, stopped paying attention to resume and product claims of CVEs generated (or detected, or scanned for, or whatever), and stopped trying to build services that monitor CVEs, we might see a lot less bogus data. And, either way, the bogus data would probably matter less.

(Don't get me started on CVSS).

currymj|5 years ago

This sounds similar to problems with peer review in academia. It mostly works fine as a guardrail to enforce scholarly norms.

However, many institutions want to outsource responsibility for their own high-stakes decisions to the peer review system, whether it's citing peer-reviewed articles to justify policy or counting publications to make big hiring decisions.

That introduces very strong incentives to game the system -- now getting any paper published in a decent venue is very high-stakes, and peer review just isn't meant for that -- it can't really be made robust enough.

I don't know what the solution is in situations like this, other than what you propose -- get the outside entities to take responsibility for making their own judgments. But that's more expensive and risky for them, so why would they do it?

It feels kind of like a public good problem, but I don't know what kind exactly. The problem isn't that people are overusing a public good, but that just by using it at all they introduce distorting incentives, which ruins it.

hannob|5 years ago

The whole problem is that at some point people started seeing CVEs as an achievement, as "if I get a CVE it means I found a REAL VULN". While really CVEs should just be seen as an identifier. It means multiple people talking about the same vuln know they're talking about the same vuln. It means if you read an advisory about CVE-xxx-yyy you can ask the vendor of your software if they already have a patch for that.

It simply says nothing about whether a vuln is real, relevant or significant.

saagarjha|5 years ago

This is also annoying because if you ask for a CVE you can get placed in the bucket with people who are just looking for a thing they can talk about, when in fact you’d like to make the bug searchable to other people.

smsm42|5 years ago

I feel this is a consequence of paying people for reporting security bugs (and only security bugs). People start to inflate the number of reports and no longer care about proper severity assignment as long as it gets them that coveted "security bug" checkbox. I mean, I can see how bounty programs and projects like HackerOne can be beneficial, but this is one of the downsides.

The CNA system actually is better, since it at least puts some filter on it. Before, it was the Wild West: anybody could assign a CVE to any issue in any product, without any feedback from anybody knowledgeable in the code base, and assign any severity they liked, which led to wildly misleading reports. I think CNAs at least provide some sourcing information and order.

brohee|5 years ago

I didn't check who filed those bugs, but I've seen companies require having discovered a CVE to apply for some jobs, and the natural consequence is gaming the system...

haukem|5 years ago

How do you mark a CVE as invalid, or request an update? I tried the "Update Published CVE" process, but nothing happened, not even a rejection, just no answer. Multiple CVEs were reported against OpenWrt which are invalid, but we (the OpenWrt team) haven't found out how to inform MITRE.

For example, CVE-2018-11116: someone configures an ACL to allow everything, and then code execution is possible, as expected: https://forum.openwrt.org/t/rpcd-vulnerability-reported-on-v...

and CVE-2019-15513: The bug was fixed in OpenWrt 15.05.1 in 2015: https://lists.openwrt.org/pipermail/openwrt-devel/2019-Novem...

We were not informed about either CVE. For the first, someone asked in the OpenWrt forum about the details of the CVE and we were not even aware that one existed. The second I saw in a public presentation from a security company mentioning 4 CVEs in OpenWrt, when I was only aware of 3.

When we or a real security researcher request a CVE for a real problem as an organization it often takes weeks till we get it, we released some security updates without a CVE, because we didn't want to wait so long. It would also be nice to update them later to contain a link to our detailed security report.

jlgaddis|5 years ago

> When we or a real security researcher request a CVE for a real problem as an organization it often takes weeks till we get it, we released some security updates without a CVE, because we didn't want to wait so long.

From your point of view, I'm sure that's probably quite frustrating. From my point of view (as a user), that's completely absurd, should never happen, and is a huge deficiency in the CVE program.

Fortunately, it's possible for the OpenWrt project to become a CNA [0] and gain the ability to assign CVE IDs itself.

See "Types" under "Key to CNA Roles, Types, and Countries" [1]:

> Vendors and Projects - assigns CVE IDs for vulnerabilities found in their own products and projects.

--

[0]: https://cve.mitre.org/cve/cna.html#become_a_cna

[1]: https://cve.mitre.org/cve/request_id.html#key_cna_roles_and_...

_kbh_|5 years ago

I would email MITRE again after a couple of months, replying to your own email that they haven't responded to. I once had to request a status update nearly two months later to get a response; I suspect they are busy.

RyJones|5 years ago

We get dozens of "high-priority" security issues filed that are resolved with "we're an open-source project; this information is public on purpose".

Our bug bounty clearly states that chat, Jira, Confluence, and our website are all out of bounds. Almost all of our reports are on those properties.

eyeareque|5 years ago

MITRE is a US-government-supported team, and previously they could not scale to meet the demand on their efforts. They did the best they could, but they still had a lot of angry people out there. The whole world uses CVEs, but it is US funded, by the way.

In come new CNAs to scale the effort through trusted teams, which makes sense. The MITRE team can only do so much on their own.

Unfortunately, I don't think anyone will be as strict and passionate about getting CVEs done right as the original MITRE team has been.

Here's hoping they can revoke CNA status from teams who consistently do not meet a quality bar.

tamirzb|5 years ago

The problem, though, is that issues with CVEs are not caused only by bad CNAs. MITRE (understandably) doesn't have the resources to verify every CVE request it receives, which has resulted in bad CVE details being filed on multiple occasions.

I wonder if maybe, instead of trying to fix CVEs, we could try to think about creating alternatives? I know some companies already use their own identifiers (e.g. Samsung with SVE), so perhaps a big group of respected companies can come together to create a new unified identifier? Just an idea though.

jebronie|5 years ago

A security auditor once reported an Adobe generator comment in an SVG file to me as a moderate "version leak vulnerability".

smsm42|5 years ago

This is a staple of audit report stuffing. Somebody got an idea that disclosing a version of anything anywhere is a huge security hole, so now any publicly visible version string generates a "moderate" (they are usually not as brazen as to call it "critical") security report.

DiabloD3|5 years ago

So... the real question is, why are CVEs that are just repackagings of software being accepted into the CVE database anyway? If it's in a Docker image, it should be immediately rejected: report the CVE against the precise upstream project instead.

jlgaddis|5 years ago

> ... why are CVEs that are just packages of software being accepted to the CVE database anyways?

Ultimately, because there are now a few hundred [0] CNAs [1] which are "authorized to assign CVE IDs" and, AFAICT, there is nothing in the "CNA rules" [2] that requires them to (attempt to) verify the (alleged) vulnerabilities -- although, in at least some instances, I assume it simply wouldn't be possible for them to do so.

--

> 7.1 What Is a Vulnerability?

> The CVE Program does not adhere to a strict definition of a vulnerability. For the most part, CNAs are left to their own discretion to determine whether something is a vulnerability. [3]

Officially, a "vulnerability" is:

> A flaw in a software, firmware, hardware, or service component resulting from a weakness that can be exploited, causing a negative impact to the confidentiality, integrity, or availability of an impacted component or components.

Fortunately, there is a "Process to Correct Assignment Issues or Update CVE Entries" [5]. In instances of multiple, "duplicate" or "invalid" CVEs, I can see how this might be both frustrating and time-consuming for software developers, though.

--

[0]: https://cve.mitre.org/cve/request_id.html

[1]: https://cve.mitre.org/cve/cna.html

[2]: https://cve.mitre.org/cve/cna/rules.html

[3]: https://cve.mitre.org/cve/cna/rules.html#section_7-1_what_is...

[4]: https://cve.mitre.org/about/terminology.html#vulnerability

[5]: https://cve.mitre.org/cve/cna/rules.html#appendix_c_process_...

Macha|5 years ago

What if the project is the docker image? What if the docker image is the primary distribution method of the software?

fractionalhare|5 years ago

That sucks. Perhaps the most annoying part of modern infosec is the absolute deluge of noise you get from scanning tools. Superfluous CVEs like this contribute to the sea of red security engineers wake up to when they look at their dashboards. Unsurprisingly, these are eventually mostly ignored.

Every large security organization requires scanning tooling like Coalfire, Checkmarx, Fortify and Nessus, but I've rarely seen them used in an actionable way. Good security teams come up with their own (effective) ways of tracking new security incidents or vastly filtering the output of these tools.

The current state of CVEs and CVE scanning is that you'll have to wrangle with bullshit security reports if you run any nontrivial software. This is especially the case if you have significant third party JavaScript libraries or images. And unfortunately you can't just literally ignore it, because infrequently one of those red rows in the dashboard will actually represent something like Heartbleed.

mnd999|5 years ago

> The current state of CVEs and CVE scanning is that you'll have to wrangle with bullshit security reports if you run any nontrivial software.

Especially if you have customers who outsourced their infosec to the lowest bidder who insist every BS CVE is critical and must be fixed.

futevolei|5 years ago

The non-stop stream of emails every day certainly sucks, but it falls far short of my employer's false-positive process, which requires several emails explaining why something is a false positive, plus following up to make sure the waiver is applied so it doesn't impact our security rating, instead of just reassigning the Jira ticket and adding a false-positive label.

bartread|5 years ago

We use Nessus and it's not too bad on the false positive front. I usually check the scan results every week or two to see if it finds anything new, and I know our Head of IT also keeps an eye on them. In an ideal world we'd automate this away but have a raft of more pressing priorities.

We also use tools like Dependabot to keep an eye out for vulnerabilities in our dependencies, and update them to patched versions. This is genuinely useful and a worthwhile timesaver on more complex projects.

It's easy to be cynical about automated scanning (and pen testing, for that matter) but, although it's often needed just as a checkbox for certification, it can certainly add value to your development process.

hendry|5 years ago

Communication breakdown.

It's a bit naughty how "security researchers" don't appear to make a good effort to communicate upstream.

And the fact that Jerry has problems reaching out to NVD or Mitre is worrying.

lmilcin|5 years ago

CVE DoS -- post so many CVEs that the system is paralyzed completely.