CVE Stuffing (jerrygamblin.com)
290 points by CapacitorSet on Jan 2, 2021 | 102 comments



Way back when, I saw a report on Hacker News about secret exposure from websites that deployed directly via a git repo as the webroot and didn't block access to .git/

I added a cheeky message to my site's .git/ folder if you attempted to view it.

About 2 or 3 months later I started getting "security reports" to the catch-all address, about an exposed git folder that was leaking my website's secrets.

Apparently because my site didn't return 404, their script assumed I was exposed and they oh so helpfully reported it to me.

Got like 4 or 5 before I decided to make it 404 so they would stop, mainly because I didn't want to bring false-positive fatigue onto "security exploit" subject-line emails.

I have a feeling CNAs are bringing this kind of low-effort, zero-regard-for-false-positive-fatigue bullshit to CVEs. Might as well just rip that band-aid off now and stop trusting anything besides the Debian security mailing list.


This is quite common. If you run a security@ mailbox at a company, you're bound to receive hundreds of bug bounty/responsible disclosure requests because of known software quirks or other design choices. They'll cite precisely one CVE or HackerOne/BugCrowd report, and then proceed to demand a huge payment for a critical security flaw.

I've seen reports that easily fail the airtight hatchway [0] tests in a variety of ways. Long cookie expiration? Report. Any cookie doesn't have `Secure`, including something like `accepted_cookie_permissions`? Report. Public access to an Amazon S3 bucket used to serve downloads for an app? Report. WordPress installed? You'll get about 5 reports for things like having the "pingback" feature enabled, having an API on the Internet, and more.

The issue is that CVEs and prior-art bug bounty payments seem "authoritative" and once they exist, they're used as reference material for submitting reports like this. It teaches new security researchers that the wrong things are vulnerabilities, which is just raising a generation of researchers that look for the entirely wrong things.

[0]: https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...


Yup, according to these "researchers", having robots.txt on your website is enough to warrant a CRITICAL vulnerability.

No, I'm not joking. That's one of the reports I saw in November. I've also had to triage the claim that our site supposedly has a gazillion *.tar.xz files available at the root. All because the 404 handler for random [non-production relevant] paths is a fixed page with a 200 response.

As far as I'm concerned, running a bulk vulnerability scanner against a website and not even checking the results has as much to do with security research as ripping wings off of flies has to do with bioengineering.


Oh god. One client I work for does automated scans, and we had an s3 bucket set up as a static site.

They freaked out when /admin/ returned permission errors, essentially a 404, because it was information leakage about admin functions of the website.


That happens when you disable directory enumeration (or whatever name that has) on S3. In that case, it sends 403s (permission denied) instead of 404s.


I know, but try explaining that to someone in very small words. There is no admin. There is no login. The api has open CORS because we want reuse and there’s no risk because there’s literally no concept of identity in the app. Everything is public data or f(public).

Scanners see things through their eyes, and they’re not used to static/public.

In the end, it was easier just to rewrite 403 into 404.


Can confirm this; we've gotten more than 20 reports and demands for bounties for "public access" on our open data subdomain (backed by S3), which literally is `public.`.

Then they beg to have the report closed as "informative". We don't comply unless it really is an honest mistake; I don't like the idea of low-quality reporters evading consequences again and again, sending scattershot bug reports in a desperate attempt to catch a team not paying attention.


You're absolutely right; I get a barrage of these. I've got to think someone out there is selling software to scan for these and spam them around.


One bad [1] side effect of this is that the low signal-to-noise ratio leads to fatigue. At some point it can lead to high-priority information (in this case: real bug bounties) being missed. Instead of manually plowing through it, you could automate declining obviously bogus reports (such as spam), but that can lead to the same outcome, which is how real mail sometimes gets lost in the spam folder.

[1] Arguably bad, depending on your interests, since this effect can be exactly what an adversary intends.


Thank you for sharing. Very useful.


Be thankful you only receive automated security reports about an open .git directory. There is some guy/company who goes around running a web spider connected to some shitty antivirus which automatically submits false abuse reports to site ISPs claiming that their customers are hosting viruses. This happened to me twice; I think after the second time my ISP started rejecting these reports outright since I haven’t seen any new ones for a few years now, even though they’re clearly still at it (or, maybe, finally stopped last year after getting DDoSed?)[0].

Automated security scanning by people who don’t know what they are doing has become an enormous hassle in so many ways and really is damaging the ability to find and handle true threats.

[0] https://twitter.com/badlogicgames/status/1267850389942042625


Speaking of "security exploits" consisting of reading publicly available information: Tarsnap has public mailing lists with public mailing list archives, and at least once a month I get an email warning me that my "internal emails" are accessible.


> I have a feeling CNAs are bringing this kind of low-effort, zero-regard-for-false-positive-fatigue bullshit to CVEs. Might as well just rip that band-aid off now and stop trusting anything besides the Debian security mailing list.

Red Hat (my employer), Canonical, and SUSE are also CNAs. I can only speak to ours, but I think our prodsec team does a great job with the resources they've been given. Nobody is perfect, but if you take the time to explain the problem (invalid CVE, wrong severity, bad product assignment, ...) they consistently take the time to understand the issue and will work with whatever other CNA or reporter to fix it. Generally we have a public tracker for unembargoed CVEs, so if it affects us and isn't legitimate or scoped correctly, you might get somewhere by posting there (or the equivalent on Ubuntu/SUSE's tracker).

Perhaps it is just the nature of the open source community that Linux distros are a part of, though, that lets them apply that same care to CVEs as well.

Doesn't help with personal reports though. :-)

Curious, did you get CVE assignments against your personal site? 0.o


> I have a feeling CNAs are bringing this kind of low-effort, zero-regard-for-false-positive-fatigue bullshit to CVEs.

Yes, being the discoverer of a CVE is a major resume item. Pen testers who have a CVE to their name can charge more. Companies can charge more for sending them to clients.


Is there a way to return a custom 404 error handler for .git and a different one for a regular 404 in Apache? Never tried that before.


Check the ErrorDocument directive for .htaccess files.


That directive doesn't have to reside in .htaccess files. It works just as well inside Directory, Virtual Host, and Server contexts.

    # Site-wide fallback 404 page
    ErrorDocument 404 /404.php

    # A different 404 body just for the .git directory
    # (point the path at the .git directory under your document root)
    <Directory "/.git">
        ErrorDocument 404 "Ah ah ah! You didn't say the magic word"
    </Directory>

https://httpd.apache.org/docs/2.4/mod/core.html#errordocumen...


How do they contact you? I have never gotten any reports.


Emails to addresses like security@domain.name or webmaster@.


> Apparently because my site didn't return 404, their script assumed I was exposed and they oh so helpfully reported it to me.

There's no good reason that folder should exist except for a joke, so how is this not a helpful message in the vast majority of cases? All lint rules have exceptions; that doesn't make them useless.


I didn't ask you to lint my code (or server) though.

There's plenty of cases where a .git directory is just harmless; I've deployed simple static sites by just cloning the repo, and this probably exposed the .git directory. But who cares? There's nothing in there that's secret, and it's just the same as what you would get from the public GitHub repo, so whatever.

That some linting tools warn about this: sure, that's reasonable.

That random bots start emailing me about this without even the slightest scrutiny because it might expose my super-duper secret proprietary code: that's just spam and rude.


> That some linting tools warn about this: sure, that's reasonable.

To clarify, I'm not condoning annoying spam, but if, say, Netlify or GitHub added a ".git folder should not exist on a public site" lint rule when you personally deploy your site, I would say it would be a net benefit.

> There's plenty of cases where a .git directory is just harmless

Pretty much all lint rules have false positives so this isn't a good yardstick. Can it potentially cause harm when you do it and is there no beneficial reason to do it? If yes to both then it's an ideal candidate for a lint rule.


> Pretty much all lint rules have false positives so this isn't a good yardstick. Can it potentially cause harm when you do it and is there no beneficial reason to do it? If yes to both then it's an ideal candidate for a lint rule.

A responsible person running such a linter does a sanity check before taking a positive result and bugging someone else with it. An irresponsible one potentially causes harm by assuming every single hit is a major finding that should turn into a bounty payout.


> A responsible person running such a linter does a sanity check before taking a positive result and bugging someone else with it. An irresponsible one potentially causes harm by assuming every single hit is a major finding that should turn into a bounty payout.

I already tried to clarify that I was talking about the general concept of good lint rules, not about people emailing for bounty payouts. We're in agreement that emails about bounty payouts for non-issues are stupid.


The reason you're getting downvoted, so you know, is that your original response heavily indicated you _were_ talking about the email reports.

You replied to someone complaining about getting emails and defended it with "but that directory shouldn't exist", implying you disagreed with their take.

You're arguing about something here that no one else is trying to talk about. The poster you originally replied to was only talking about the email case, so your response is contextualized in that case already.

If your original post had been "Yeah, I agree. That would make sense as a CI rule that you run, not as a scanner someone else runs" then you wouldn't have gotten any pushback, but your post was strongly implying a position you apparently don't hold.


> > There's plenty of cases where a .git directory is just harmless
>
> Pretty much all lint rules have false positives so this isn't a good yardstick. Can it potentially cause harm when you do it and is there no beneficial reason to do it? If yes to both then it's an ideal candidate for a lint rule.

Yeah sure, it should be a lint rule, we can quickly agree on that. But that wasn't really my point: my point was that random people from the internet are running these kinds of high-false-positive linters without asking and then start emailing people about it.


Well, according to the post, the OP returned a cheeky message, and any Mk. I Eyeball would clearly spot it as an intended condition. Automated scan-spam gets on your nerves pretty quickly.

I run a small vulnerability disclosure program and receive a ton of it - people clearly run automated scanners, which I presume create automated vulnerability reports, on things that are not even remotely dangerous AND have been specifically ruled out of scope for the program.

It's not helpful, it's time-consuming, and people will often complain if you don't answer their reports.


This is not a helpful message in the vast majority of cases. Lots of servers out there that always return 200


> Lots of servers out there that always return 200

For most public websites that you want indexed by search bots, that's poor configuration worth fixing. It's called a soft 404, and it makes it troublesome to detect when links are invalid, broken, or have moved. Google will even warn you about it: https://developers.google.com/search/docs/advanced/crawling/...
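
For what it's worth, a scanner can mostly avoid this class of false positive with a canary request. A minimal sketch in Python, assuming the third-party requests library (the names here are illustrative, not any particular scanner's code):

    import secrets
    import requests

    def git_dir_exposed(base_url: str) -> bool:
        # Probe a path that cannot exist; a 200 here means the site serves
        # "soft 404s", so status codes alone prove nothing.
        canary = requests.get(f"{base_url}/{secrets.token_hex(16)}", timeout=10)
        soft_404_body = canary.text if canary.status_code == 200 else None

        resp = requests.get(f"{base_url}/.git/config", timeout=10)
        if resp.status_code != 200:
            return False
        # If every path returns the same 200 page, this is not a finding.
        if soft_404_body is not None and resp.text == soft_404_body:
            return False
        # Only report if the body actually looks like a git config file.
        return "[core]" in resp.text

    print(git_dir_exposed("https://example.com"))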


The vast majority of servers on port 80 are not public websites that you want indexed by search bots.


I'm a command-line development tools maintainer for an OS. I am not unfamiliar with high-severity CVEs in my inbox along the lines of "gdb crashes on a handcrafted core file, causing a DoS". I am unfamiliar with a real world in which a simple old-fashioned segfault in a crash analysis tool is truly a denial-of-service security vulnerability, but our security department assures us we need to drop all revenue work and rush out a fix because our customers may already be aware that our product is shipping with a known CVE.

There are occasions in which I recognize a CVE as a vulnerability to a legitimate possible threat to an asset. By and large, however, they seem to be marketing material for either organizations offering "protection" or academics seeking publication.

I think that, like anything else of value, inflation will eat away at the CVE system until something newer and once again effective comes along.


Ah yes, this also fits with the famous "no insecure algorithms" rule, in which an auditor will check a box if you use MD5, even for a feature totally unrelated to security.


In fairness, those sorts of features tend to be subject to scope creep where they start being used for security.

For instance, Linus Torvalds (a very smart person) resisted using something stronger than SHA-1 for Git because he said the purpose of hashes isn't security, it's content-addressable lookup of objects. Which may have been true at the time, but then Git added commit signing. Now if you sign a commit, no matter how strong of an algorithm you use, the commit object references a tree of files via SHA-1. Git is currently undergoing an extremely annoying migration to support new hash algorithms, which could have been avoided.
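
A quick Python sketch (mine, not from the thread) of what "content-addressable" means here; this is Git's standard object encoding for a blob, and it shows why everything a signed commit points at is pinned by the object hash:

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # A Git object id is the SHA-1 of a "<type> <size>\0" header plus the
        # content, so the tree a signed commit references is only as strong
        # as SHA-1 itself.
        header = b"blob %d\x00" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    # Should match: echo 'hello world' | git hash-object --stdin
    print(git_blob_id(b"hello world\n"))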

Also, BLAKE3 is faster than MD5 and also far more secure, so if you're saying "It's okay I'm using MD5 because I want a faster hash and SHA-256 is too slow," there are options other than SHA-256.

If the thing you're trying to hash really really isn't cryptographic at all, you can do a lot better than MD5 in terms of performance by using something like xxHash or MurmurHash.

So, even if it isn't a security vulnerability, using MD5 in a new design today (i.e., where there's no requirement for compatibility with an old system that specified MD5) is a design flaw.
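
To make the "there are faster options" point concrete, here's a rough stdlib-only Python sketch (BLAKE3 and xxHash would need third-party packages, but hashlib's BLAKE2 is built in; exact numbers depend on your machine and OpenSSL build):

    import hashlib
    import timeit
    import zlib

    data = b"x" * (16 * 1024 * 1024)  # 16 MiB of test data

    # Compare throughput of stdlib hashes; BLAKE2 is both fast and
    # cryptographically strong, so "MD5 because it's fast" rarely holds up.
    for name in ("md5", "sha1", "sha256", "blake2b"):
        t = timeit.timeit(lambda: hashlib.new(name, data).digest(), number=5)
        print(f"{name:>8}: {t / 5:.3f} s per 16 MiB")

    # For purely non-cryptographic jobs (cache keys, sharding), a checksum such
    # as CRC32 (or third-party xxHash) avoids dragging MD5 into a new design.
    print("   crc32:", hex(zlib.crc32(data)))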


> Also, BLAKE3 is faster than MD5 and also far more secure, so if you're saying "It's okay I'm using MD5 because I want a faster hash and SHA-256 is too slow," there are options other than SHA-256.

True, but BLAKE3 isn't shipped as part of the standard library of many (any?) languages, whereas MD5 is. There are third-party implementations for a lot of languages, but using one of these brings up a lot of problems:

1. Are you sure the implementation doesn't have any bugs? AFAIK, the BLAKE3 team has only created C and Rust implementations, so anything else likely hasn't received the same level of care.

2. How are you going to be notified of bugs or vulnerabilities in the implementation? For your language's standard library, it's usually easy to get notified of any bugs or vulnerabilities, but you're probably not going to get that from some random implementation on GitHub.

3. Pulling in the dependency can be an attack vector in itself. For example, if you use the JavaScript implementation on NPM, you now have to worry about the NPM author having their account compromised and the package being replaced with malicious code.


That's fair, I should have added that as an exception too. Another similar case: you're writing a shell script and you can assume the target machines all have md5sum installed but not necessarily b3sum.


The security team at a previous employer added a system-wide checker to our GitHub Enterprise installation that would spam comments on any change to a file in which Math.random was used. The idea was that anyone using random numbers must be implementing a cryptographic protocol and therefore should not be using Math.random, as it's not a CSPRNG.

So all the A/B tests, percentage rollouts, etc. started getting spam PR comments until they were made to turn it back off again.

Frankly, if a teammate were writing their own crypto algorithm implementation in the bog-standard web app we were working on, that would be more concerning than which RNG they're using.
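
A rough Python analogue of the distinction the checker missed; the function names are just illustrative:

    import random
    import secrets

    def ab_bucket(user_id: str, rollout_percent: int) -> bool:
        # A/B tests and percentage rollouts: predictability doesn't matter,
        # we only want a stable, roughly uniform split per user.
        rng = random.Random(user_id)  # deliberately seeded and reproducible
        return rng.randrange(100) < rollout_percent

    def session_token() -> str:
        # Anything an attacker must not be able to guess: use the CSPRNG.
        return secrets.token_urlsafe(32)

    print(ab_bucket("user-42", 25))
    print(session_token())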


I've seen exactly this many times in audits (it gets them a high score!). If they flag it and don't check the usage, I know they didn't bother putting anyone good on the audit, or only ran automated tools, and the result is pretty much useless. The same can now be said for SHA-1: it gets them results quickly and looks good in the final report.


Related, Apple marks any use of MD5 with a warning if you use their SDKs. Good luck getting rid of it if you’re using Swift, because the community has not yet decided whether silencing warnings is something they would like in the language or not. I’m getting kind of sick of using dlsym to fish out the function pointer :(


There’s the “executive” level of this stupidity, where an app replaces its MD5 OpenSSL calls with its own internal copy-pasta of the function.

Look ma! We’re FIPS compliant now!


Unfortunately, that happens because most regulations try to enforce a black-and-white rulebook, which is easy on the auditors but extremely difficult on those being audited.

I now think most compliance regulations are by auditors, for auditors... :-D :-D


TBH, if you are doing security with it, it's obviously wrong, but if you are not, it's also wrong, because way better (faster) options exist for non-security usage...


[flagged]


It's sad, however, when a highly non-exploitable crash is treated as a five-alarm fire while a "silently corrupts user data" bug falls by the wayside, because people don't generally write security vulnerability reports for those.

I've heard from some people that they have considered filing security CVEs against non-security but high user impact bugs in software that they're working on, just to regain control of priorities from CVE spammers.


Agree, but having to make these judgement calls at all is a mistake. We need to get to 'just fix it'.


I don't quite get what you mean.

There's finite time and developer effort. You always have to make judgement calls about what to prioritize over what, you can't "just fix it" for literally everything unless you're in a very fortunate position of working in a codebase with minimal tech debt, a mature scope, and sufficient developer-hours.

If you're saying that the CVEs that amount to "update dependency X" or whatever should be "just automatically fixed" rather than have to be prioritized, I agree that should be true for a subset of em... But not every dependency update or CVE resolution is trivial, and even the supposedly trivial ones may still require a certain amount of testing or refactoring.


If the codebase is sufficiently complex, irrespective how mature and tech-debt-free, certain dependency upgrades are simply non-trivial (this includes the testing effort as well as the actual upgrade effort). Like they say "there are no small changes in a big system".

So resolving certain CVEs is simply a delicate balance between the actual damage potential and the amount of effort.


A non-exploitable crash can be a denial of service, given the right configuration. For example, filling up disk space with core files, or crashing at the right time to force expensive operations to retry/roll back.


This is exactly the attitude we’re talking about. Ok, if you do a bunch of things maybe it could make the service throw a disk usage warning email your way. But a service that is actually crashing now is obviously quite a bit more important.


No, this is exactly the incorrect definition of the problem that vendors keep talking about.

Crashing a single user's thread on a web service is the definition of useless; it's not annoying anyone but the attacker's own session.


"Never fix it" is one extreme.

"Drop all revenue work and rush out a fix" is another.

The previous poster didn't say it should never get fixed, but rather that there's some nuance to be had in these things, and that fixing it in e.g. the next release is usually just fine too.


No disagreement here. What is dangerous for me is the idea that difficulty upgrading for security fixes does not predict the same difficulty for other fixes. It's not that security bugs are uniquely hard to patch, it's that dependency management on the whole is neglected and security gets the blame.

Those crusty old dependencies and the processes around them are an operational risk; we should be lowering the bar to just patching rather than picking and choosing.


You are assuming that this is about dependencies. The OP's example is explicitly "gdb crashes when opening a malformed core dump and can be used for DoS". If you were working on GDB and got this bug report, would you consider it a fire to be put out immediately? Or would it be a low-impact bug to be looked at when someone gets some free time?

The OP is complaining that, if there is a CVE associated for whatever stupid reason, the bug suddenly jumps from a "might fix" to "hot potato".


That's fair


Who is talking about "crusty old dependencies"? Or processes which are an "operational risk"? The previous poster never mentioned any of those things.


They get old and crusty when you have to choose not to patch, or deprioritize those not-so-serious bugs, because the operational cost is too high.

Developers shouldn't have to make this call, the cost should be zero.


I think you're making all sorts of assumptions and extrapolations here that I'm not really seeing any hints of. What I see is that someone is responsible for dealing with CVEs, judges its severity as they come in, and concludes that a lot of them are just cruft and not really worthy of a CVE as such. Nothing more, nothing less.


I see your point


> really a reflection of awful development practices.

You don't know a thing about GP's development practices so perhaps you should be a bit slower to hurl accusations.


It will probably be less effort to patch (increment the version number of) a non-existent vulnerability than to explain it to every customer that comes with a report from a third-party auditor.

CVEs for non-vulnerabilities are like corporate trolling.


Lots of CVEs are illegitimate. You have people creating whole "vulnerabilities" that are just long-known features of various technologies. The worst I remember is the "discovery" of "Zip Slip" and "ZipperDown", which were both just gotchas in the zip format that have been known about for decades now. Both got trendy websites just like Spectre and Meltdown, and loads of headlines. ZipperDown.org is now an online slots website.

- https://snyk.io/research/zip-slip-vulnerability

- http://phrack.org/issues/34/5.html#article

- https://www.youtube.com/watch?v=Ry_yb5Oipq0


Hi there. Danny here, co-founder at Snyk and the guy behind the Zip Slip research. First, at no point did we claim that this is a new type of vulnerability. On the contrary, in every talk I gave (most are on YouTube) I started by saying that it's a 30-year-old vuln, originally published in Phrack, and showed the actual Phrack issue.

Secondly, the real problem here is that 30 years later, in some languages like Java, more than 90% of archive extraction implementations are vulnerable to this issue: really vulnerable, RCE kind of vulnerable. So no, this is not just a "zip format gotcha"; this is a real issue in real apps. This is the kind of vulnerability that every security person knows of, but not that many developers do. When they write extraction code, they most often do it without considering the path traversal issues. Some languages solved it by providing a simple API for you to extract an archive, like Python's zipfile.extractall(), which is great! But others like Java stayed behind and made developers either write it themselves (wrongly) or copy and paste it from Stack Overflow (most answers are vulnerable). Fast forward 30 years, and there are still too many vulnerable apps; we identified several hundred.

Since this is an issue of awareness, we thought it would be good to have a better name: just like "zip bomb" is well known, Zip Slip should be too. Neither is a zip-only issue (other archivers and compressors are affected), but both make it simple to remember. Anyway, looking back it's very easy to see the impact of such research. I'm not talking about Snyk's marketing and such; I'm talking about hundreds of open source projects fixing the issue (maintainers confirming it), CVEs assigned, and many developers learning about it (blog posts, talks, etc). Peace
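
For anyone who hasn't seen the bug before, here's a minimal Python sketch (illustrative only, not Snyk's code) of the path check that safe extraction routines perform and that hand-rolled extraction code so often omits:

    import os
    import zipfile

    def safe_extract(archive_path: str, dest_dir: str) -> None:
        dest_dir = os.path.realpath(dest_dir)
        with zipfile.ZipFile(archive_path) as zf:
            for member in zf.namelist():
                target = os.path.realpath(os.path.join(dest_dir, member))
                # An entry like "../../etc/cron.d/evil" resolves outside the
                # destination directory; refuse to extract it.
                if not target.startswith(dest_dir + os.sep):
                    raise ValueError(f"blocked path traversal entry: {member!r}")
            zf.extractall(dest_dir)

    # Example usage (assumes upload.zip exists in the current directory):
    # safe_extract("upload.zip", "/tmp/extracted")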


I believe the ZipSlip was/is a marketing effort for snyk.


I think this goes hand-in-hand with people naming security vulnerabilities and trying to make them a big spectacle. Sometimes it is a legitimately serious vulnerability, like Shellshock or Heartbleed, but a lot are just novices trying to get their 15 minutes of fame. I remember a few years back there was a "vulnerability" named GRINCH, where the person who discovered it claimed it was a root privilege escalation that worked on all versions of Red Hat and CentOS. They made a website and everything for it, and tried to hype it up before disclosing what it was. Turns out the "vulnerability" was members of the wheel group being able to use sudo to run commands as root.


It's hard for me to think of a serious downside for named vulnerabilities. People who try to name sev:lo bugs get made fun of; it backfires.


It just causes extra annoyance at work. There have been a few times when some named vulnerability gets covered by a generic tech website, and the next day at work my inbox has 2-3 meeting invites from non-technical project managers to discuss what needs to be done to mitigate the vulnerability, regardless of its severity, and without even knowing if our organization is vulnerable to it.


It seems like there may be value in writing up a template for vulnerability comms:

“Hi folks, a new vulnerability has been disclosed (CVE-####-####). We’ve assessed this vulnerability, and it doesn’t affect our infrastructure because [we don’t use the affected software|we don’t use the vulnerable configuration|the vulnerability is mitigated by other security controls].”

If the worst impact of naming vulnerabilities is that security-related technical staff have to politely decline a couple meeting invites, I’m going to consider the practice an overall win.


It is, but also worth keeping in mind that vulnerability triage is just an annoying, resource-intensive process. Putting aside the "named vulnerability" thing, the most common prompt for a triage process is "new vulnerability discovered in a dependency"; that will happen several times a week in most significant products. Almost all of those vulnerabilities are marginal, and even the ones that aren't are usually not exposed in a typical use of the dependency. It's just an annoying problem.


No disagreement on it being a chore. The template doesn’t cut down on the actual work of triaging these, it just (hopefully, in a healthy org) helps avoid the “3 meetings per CVE with non-technical managers” part, which does seem avoidable.


This is a large part of my job. If something pops up in the news that mentions our tech/industry/posture (or I suspect it will get C-suite attention), I immediately do a write-up just like that. Depending on the severity (or even media "buzz") I will include screenshots of my investigation and CC the relevant architects/managers. Still, that sometimes leads to managers wanting a meeting to discuss the email further, but it GREATLY reduces panic emails when something crosses their newsfeed.

On this topic: I also run our vulnerability management program and have to stress that the CVSS score is not the lone factor in how much we care. I get lots of emails from people in the company saying "hey, did you see this?" for some random no-impact vulnerability, but I am MORE than happy to thank them for the vigilance and explain why it's not an impact, because I want them to care.


I remember when people in the security community started filing CVEs against the TensorFlow project, claiming that code execution was possible with a handcrafted TensorFlow graph, and the team would have to try and explain, "TensorFlow GraphDefs are code".


The whole situation around CVEs in TensorFlow is very painful; you get GitHub security notifications for any public repository using TF because of a "known CVE", even though it's basically just a train.py script that is not deployed anywhere.


I understand the frustration, and I'm pretty sure the root cause is straightforward ("number of CVEs generated" is a figure of merit in several places in the security field, especially resumes, even though it is a stupid metric).

But the problem, I think, contains its own solution. The purpose of CVEs is to ensure that we're talking about the same vulnerability when we discuss a vulnerability; to canonicalize well-known vulnerabilities. It's not to create a reliable feed of all vulnerabilities, and certainly not as an awards system for soi-disant vulnerability researchers.

If we stopped asking so much from CVEs, stopped paying attention to resume and product claims of CVEs generated (or detected, or scanned for, or whatever), and stopped trying to build services that monitor CVEs, we might see a lot less bogus data. And, either way, the bogus data would probably matter less.

(Don't get me started on CVSS).


This sounds similar to the problems with peer review in academia. It mostly works fine as a guardrail to enforce scholarly norms.

However, many institutions want to outsource responsibility for their own high-stakes decisions to the peer review system, whether it's citing peer-reviewed articles to justify policy or counting publications to make big hiring decisions.

That introduces very strong incentives to game the system: now getting any paper published in a decent venue is very high-stakes, and peer review just isn't meant for that; it can't really be made robust enough.

I don't know what the solution is in situations like this, other than what you propose: get the outside entities to take responsibility for making their own judgments. But that's more expensive and risky for them, so why would they do it?

It feels kind of like a public good problem, but I don't know what kind exactly. The problem isn't that people are overusing a public good, but that just by using it at all they introduce distorting incentives, which ruins it.


My basic take is: if "CVE stuffing" bothers you, really the only available solution is to stop being bothered by it, because the incentives don't exist to prevent it. People submitting bogus or marginal CVEs are going to keep doing that, and CNAs aren't staffed and funded to serve as the world's vulnerability arbiters, and even if they were, people competent to serve in that role have better things to do.

The problem is the misconception ordinary users have about what CVEs are; the abuses are just a symptom.


I suspect for both peer review and CVEs, and probably some similar situations I'm not thinking of, it's not just a misconception, it's often more like wishful thinking.

People really want there to be a way of telling what's good and important that doesn't cost them any money or effort. Ironically these systems can sort-of work for that purpose, only if people don't try to use them for that purpose.


I think both are instances of Goodhart-Campbell-Strathern's law: "When a measure becomes a target, it ceases to be a good measure."


The whole problem is that at some point people started seeing CVEs as an achievement, as "if I get a CVE it means I found a REAL VULN". While really CVEs should just be seen as an identifier. It means multiple people talking about the same vuln know they're talking about the same vuln. It means if you read an advisory about CVE-xxx-yyy you can ask the vendor of your software if they already have a patch for that.

It simply says nothing about whether a vuln is real, relevant or significant.


This is also annoying because if you ask for a CVE you can get placed in the bucket with people who are just looking for a thing they can talk about, when in fact you’d like to make the bug searchable to other people.


I feel this is the consequence of paying people for reporting security bugs (and only security bugs). People start to inflate the number of reports and no longer care about proper severity assignment as long as it gets them that coveted "security bug" checkbox. I mean, I can see how bounty programs and projects like HackerOne can be beneficial, but this is one of the downsides.

The CNA system actually is better, since it at least puts some filter on it. Before, it was the Wild West: anybody could assign a CVE to any issue in any product, without any feedback from anybody knowledgeable in the code base, and assign any severity they liked, which led to wildly misleading reports. I think CNAs at least provide some sourcing information and order.


Didn't check who filed those bugs, but I've seen companies require a discovered CVE to apply for some jobs, and the natural consequence is gaming the system...


I checked; it seems to be a student at Seoul National University, South Korea. https://github.com/donghyunlee00/CVE


Huh. I wonder if it's a student doing an assignment and not realizing they're submitting to a real database.

Their other GitHub work is following tutorials, labs and courses.


A second guy is also doing this. The CVEs reference third-party advisories such as https://github.com/koharin/koharin2/blob/main/CVE-2020-35185

This repository no longer exists.


How do you mark a CVE as invalid or request an update? I tried the "Update Published CVE" process, but nothing happened, not even a rejection, just no answer. Multiple invalid CVEs were reported against OpenWrt, but we (the OpenWrt team) haven't found out how to inform MITRE.

For example, CVE-2018-11116: someone configures an ACL to allow everything, and then code execution is possible, as expected: https://forum.openwrt.org/t/rpcd-vulnerability-reported-on-v...

and CVE-2019-15513: The bug was fixed in OpenWrt 15.05.1 in 2015: https://lists.openwrt.org/pipermail/openwrt-devel/2019-Novem...

We were not informed of either CVE. For the first one, someone asked in the OpenWrt forum about the details of the CVE, and we were not even aware that there was one. The second one I saw in a public presentation from a security company mentioning 4 CVEs for OpenWrt, when I was only aware of 3.

When we, or a real security researcher, request a CVE for a real problem as an organization, it often takes weeks until we get it; we have released some security updates without a CVE because we didn't want to wait that long. It would also be nice to be able to update them later to add a link to our detailed security report.


> When we, or a real security researcher, request a CVE for a real problem as an organization, it often takes weeks until we get it; we have released some security updates without a CVE because we didn't want to wait that long.

From your point of view, I'm sure that's probably quite frustrating. From my point of view (as a user), that's completely absurd, should never happen, and is a huge deficiency in the CVE program.

Fortunately, it's possible for the OpenWRT project to become a CNA [0] and gain the ability to assign CVE IDs themselves.

See "Types" under "Key to CNA Roles, Types, and Countries" [1]:

> Vendors and Projects - assigns CVE IDs for vulnerabilities found in their own products and projects.

--

[0]: https://cve.mitre.org/cve/cna.html#become_a_cna

[1]: https://cve.mitre.org/cve/request_id.html#key_cna_roles_and_...


I would email MITRE after a couple of months, replying to your own email that they haven't responded to. I once had to request a status update nearly two months later to get a response; I suspect they are busy.


We get dozens of "high-priority" security issues filed that are resolved with "we're an open-source project; this information is public on purpose".

Our bug bounty clearly outlines that chat, Jira, Confluence, and our website are all out of bounds. Almost all of our reports are on those properties.


MITRE is a US-government-supported team, and previously they could not scale to meet the demand for their efforts. They did the best they could, but they still had a lot of angry people out there. The whole world uses CVEs, but it is US-funded, by the way.

In come the new CNAs, scaling the effort through trusted teams, which makes sense. The MITRE team can only do so much on their own.

Unfortunately I don’t think anyone will be as strict and passionate about getting CVEs done right as the original MITRE team has been.

Here's hoping they can revoke CNA status from teams that consistently do not meet a quality bar.


The problem, though, is that issues with CVEs are not caused only by bad CNAs. MITRE (understandably) doesn't have the resources to verify every CVE request it receives, which has resulted in bad CVE details being filed on multiple occasions.

I wonder if maybe, instead of trying to fix CVEs, we could try to think about creating alternatives? I know some companies already use their own identifiers (e.g. Samsung with SVE), so perhaps a big group of respected companies can come together to create a new unified identifier? Just an idea though.


Getting everyone on board would be tough; some have tried and failed, like OSVDB. It requires funding and passionate folks to run it. I think what we could do is spin the CVE arm of MITRE off into a non-profit, and ask all major companies that want to be on the board to chip in and support it. This could have challenges too that would need to be addressed.


A security auditor once reported an Adobe generator comment in an SVG file to me as a moderate "version leak vulnerability".


This is a staple of audit report stuffing. Somebody got an idea that disclosing a version of anything anywhere is a huge security hole, so now any publicly visible version string generates a "moderate" (they are usually not as brazen as to call it "critical") security report.


So... the real question is, why are CVEs that are just packages of software being accepted to the CVE database anyways? If it's in a Docker image, it should be immediately rejected: report the CVE for the precise upstream project instead.


> ... why are CVEs that are just packages of software being accepted to the CVE database anyways?

Ultimately, because there are now a few hundred [0] CNAs [1] which are "authorized to assign CVE IDs" and, AFAICT, there is nothing in the "CNA rules" [2] that requires them to (attempt to) verify the (alleged) vulnerabilities -- although, in at least some instances, I assume it simply wouldn't be possible for them to do so.

--

> 7.1 What Is a Vulnerability?

> The CVE Program does not adhere to a strict definition of a vulnerability. For the most part, CNAs are left to their own discretion to determine whether something is a vulnerability. [3]

Officially, a "vulnerability" is [4]:

> A flaw in a software, firmware, hardware, or service component resulting from a weakness that can be exploited, causing a negative impact to the confidentiality, integrity, or availability of an impacted component or components.

Fortunately, there is a "Process to Correct Assignment Issues or Update CVE Entries" [5]. In instances of multiple "duplicate" or "invalid" CVEs, I can see how this might be both frustrating and time-consuming for software developers, though.

--

[0]: https://cve.mitre.org/cve/request_id.html

[1]: https://cve.mitre.org/cve/cna.html

[2]: https://cve.mitre.org/cve/cna/rules.html

[3]: https://cve.mitre.org/cve/cna/rules.html#section_7-1_what_is...

[4]: https://cve.mitre.org/about/terminology.html#vulnerability

[5]: https://cve.mitre.org/cve/cna/rules.html#appendix_c_process_...


What if the project is the docker image? What if the docker image is the primary distribution method of the software?


That sucks. Perhaps the most annoying part of modern infosec is the absolute deluge of noise you get from scanning tools. Superfluous CVEs like this contribute to the sea of red security engineers wake up to when they look at their dashboards. Unsurprisingly, these are eventually mostly ignored.

Every large security organization requires scanning tooling like Coalfire, Checkmarx, Fortify and Nessus, but I've rarely seen them used in an actionable way. Good security teams come up with their own (effective) ways of tracking new security incidents or vastly filtering the output of these tools.

The current state of CVEs and CVE scanning is that you'll have to wrangle with bullshit security reports if you run any nontrivial software. This is especially the case if you have significant third party JavaScript libraries or images. And unfortunately you can't just literally ignore it, because infrequently one of those red rows in the dashboard will actually represent something like Heartbleed.


> The current state of CVEs and CVE scanning is that you'll have to wrangle with bullshit security reports if you run any nontrivial software.

Especially if you have customers who outsourced their infosec to the lowest bidder who insist every BS CVE is critical and must be fixed.


This ^^^. I have experienced it first hand for the last year or so, and it gets really annoying!


The non-stop stream of emails every day certainly sucks, but it falls far short of my employer's false-positive process, which requires several emails explaining why it's a false positive and following up to make sure the waiver is applied so as not to impact our security rating, instead of just reassigning the Jira ticket and adding a false-positive label.


We use Nessus and it's not too bad on the false positive front. I usually check the scan results every week or two to see if it finds anything new, and I know our Head of IT also keeps an eye on them. In an ideal world we'd automate this away but have a raft of more pressing priorities.

We also use tools like Dependabot to keep an eye out for vulnerabilities in our dependencies, and update them to patched versions. This is genuinely useful and a worthwhile timesaver on more complex projects.

It's easy to be cynical about automated scanning (and pen-testing, for that matter), but although it's often just a checkbox needed for certification, it can certainly add value to your development process.


Communication breakdown.

It's a bit naughty how "security researchers" don't appear to make a good effort to communicate upstream.

And the fact that Jerry has problems reaching out to NVD or MITRE is worrying.


See additional context in this issue in docker-library/memcached: https://github.com/docker-library/memcached/issues/63#issuec...

And this issue in my docker-adminer: https://github.com/TimWolla/docker-adminer/issues/89


CVE DoS: post so many CVEs that the system is completely paralyzed.



