This resonates. Unfortunately, the CVE database(s) are too noisy to be useful. It could benefit from higher standards and more thorough vetting. (Maybe take some lessons from academia.)
A "security researcher" once filed a CVE for a regular bug in Caddy [0], making claims that were totally provably false. It was assigned 7.5... the same as Heartbleed [1] -- yes, the one that leaked almost all the private encryption keys on the Internet back in 2014. When I appealed to NVD for retraction (or whatever they do in this case), I never heard back, despite several emails / form submissions. It is not well-maintained. It is poorly managed. There is virtually no oversight. NVD is a trainwreck.
More recently I inadvertently discovered a 0-day RCE in acme.sh [2]. (ACME clients are security-sensitive contexts since they typically deal with private keys and download signed credentials.) Anyway, it was assigned a CVSS 3.x score of * 9.8 * [3] -- I imagine that should be like "cyber-nuclear meltdown" territory, but no, this was actually benign as far as we can tell. Probably deserves more like a 4 or 5 or something. So, again: There is virtually no oversight. NVD is a trainwreck.
Anyway, the whole system is broken, and I'm effectively ignoring CVEs now. But if someone just tells me to patch my <whatever>, I'll probably do it, and that's good enough for me.
It seems a lot of infosec folks have these shallow "X = bad" mappings in their brains. Like in that Caddy issue, "out of bounds read = bad" even though realistically you can't do anything bad with it.
I see similar thinking all the time with bug bounties at work. We had an XSS report once, but it was under an odd domain that doesn't host any authenticated resources. Yet "XSS = bad" so the report had a way higher urgency score than it should've. Sure we want to fix it - and we did - but it wasn't a credential-stealing nightmare scenario XSS.
> "out of bounds read = bad" even though realistically you can't do anything bad with it
Schneier's law [1] is at play here. Especially in situations where the person who wrote the bad code in the first place is later tasked to fix it. If they couldn't see the problem the first time it is going to be tough for them the second time around.
I've gazed longingly at a promising bug for weeks with no idea how to weaponize it, only to finally give up and ask someone smarter who tends to point out a trick I had never seen before. Even with a career in security old enough to buy beer, I am still amazed by the clever shit I miss.
1. "Any person can invent a security system so clever that she or he can't think of how to break it."
This is a fair point. I actually hesitated a bit before posting my comment because I don't actually know 100% that the XSS was harmless. But the reporter didn't actually demonstrate anything other than an alert('owned').
You may have noticed many websites host user-uploaded content on a different domain to their main site. Github delivers some things from githubusercontent.com, Google some things from googleusercontent.com, Reddit delivers some things from redditmedia.com and so on.
The reason they do this is to give a big layer of protection against the harms of XSS - even if user-uploaded content manages to execute arbitrary javascript on googleusercontent.com, that javascript can't access cookies for google.com as it's hosted on a different domain.
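A minimal sketch of that pattern, assuming a hypothetical usercontent.example origin and an uploads/ directory (Python stdlib only, illustrative rather than production-ready): the separate registrable domain already keeps any script that runs away from the main site's cookies, and the sandboxing CSP header adds an extra belt on top.
    from functools import partial
    from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

    class UserContentHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Never set cookies on this origin; the CSP sandbox directive also strips
            # script/form/navigation rights from whatever the user uploaded.
            self.send_header("Content-Security-Policy", "sandbox")
            self.send_header("X-Content-Type-Options", "nosniff")
            super().end_headers()

    if __name__ == "__main__":
        # In production this would sit behind usercontent.example, not the main domain.
        handler = partial(UserContentHandler, directory="uploads")
        ThreadingHTTPServer(("", 8080), handler).serve_forever()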
Some scoring guides rate XSS as very high risk, assuming you don't have this mitigation in place. resonious had the mitigation already ("it was under an odd domain that doesn't host any authenticated resources") so the XSS wasn't a very high risk.
With that said - some people will thank you for demonstrating how a small security problem can be escalated into an account takeover, but other people will call the cops on you for hacking their website or threaten to sue you. So I would say if you're reporting XSS it's reasonable to stick with an alert box, unless you know the person receiving the reports is reasonable.
>Google some things from googleusercontent.com, Reddit delivers some things from redditmedia.com and so on.
Exactly:
>If you are injecting script in subdomains of (sandbox) domains such as: [...] ...we won't file a bug based on your report, unless you can come up with an attack scenario where the injected code gains access to sensitive user data.
It depends. Where is that JS executed? On the login page? The payment details form? In a restricted IFrame serving a tracking pixel? On a static page handling public document downloads with a different domain to the logged in contexts?
With a good security report, you want to include example impact. "I can run alert." - meh, but should be checked/patched just in case. "I can run arbitrary JS on a page collecting payment details, without CSP restriction." - now that's immediately bad.
There is a medium-to-weak argument that enforcing these minimum standards even in clearly benign places raises standards everywhere, which means it is much less likely to show up in the really bad places.
I'm just playing devil's advocate, though - I don't think most people are playing 3d chess when they ask for these changes, they just want the line item on their report cleared up.
> It seems a lot of infosec folks have these shallow "X = bad" mappings in their brains. Like in that Caddy issue, "out of bounds read = bad" even though realistically you can't do anything bad with it.
As others have pointed out, no few "unexploitable" issues have turned out to be entirely exploitable in the hands of the right person. In a world where innocuous vulnerabilities can be chained together into very dangerous ones, this gets much worse. As a colleague of mine described to me, CVE math means 1+1+1=10.
More subtly, this interacts with one of the weirder ideas in security. Vulnerabilities exist before they're known. This means that there's likely a series of vulnerabilities lurking in every bit of software you use. It's hard to do much about those with certainty, but you can do something about the bug in front of you to prevent it from contributing to CVE math.
To put it another way - risk analysis has room for error. Don't be too certain of yours.
I have personally rejected a candidate who claimed to know professional security tools but maintained that it was not his job to filter out false positives ("because they don't exist") before presenting the results to the developer team. The same candidate would also say "you need a firewall" but was not able, in an interview setting, to explain how to protect the database server using a firewall - i.e., what exactly to allow.
Yes, unfortunately any kind of staff position that does not deliver product attracts these types who just want to hide and never be accountable for delivering value to the business. I'm not saying the positions aren't needed or valuable, but just that it is appealing to the wrong kind of people.
And unfortunately, their value is often directly proportional to the amount of workload they add to the productive segments. People wonder why security teams are the first to be cut during hard times, but this is basically why. That said, I can see both sides of it, security is obviously of great importance. But there just has to be a better way, perhaps some categorization of threat models cross referenced against the CVEs/etc.
It seems a lot of non-infosec but technical folks have a pattern of shallow first-order thinking. Like in that XSS issue, "no authenticated resources = not a big deal" even though realistically you could redirect end-users to a phishing domain (I can assure you that a much larger percentage than you'd like would fall for this), or create and delete invisible DOM elements in such a manner as to exploit a UAF vulnerability in their browser's rendering engine, perform a sandbox escape, and get code execution in your users' userland, where the attacker proceeds to dump all of your users' saved credentials and start emptying bank accounts - all because you couldn't imagine anything bad happening from an XSS on a site with no authenticated resources and therefore chose not to prioritize fixing it. Even CPU-level speculative execution vulnerabilities can be invoked through sandboxed JS running in a browser.
Deprioritizing an XSS vuln in an end-user facing website you built because there isn't sensitive, authenticated data on that domain is like being the owner of a construction company that built a hydroelectric dam incorrectly, who notices visible cracks in the dam that aren't supposed to be there and decides not to tell anyone and "maybe fix it later" because the hydroelectric generator is still working fine and cracks in the dam don't cause generators to stop working.
An OOB read allows an attacker to measure internal state such as stack layout and memory allocator state; this widget, along with another flaw, makes exploits much more reliable.
You -can- definitely do bad things with the data gained from it.
For the memory allocator data, read up on the "house of" attacks against glibc's allocator.
> "out of bounds read = bad" even though realistically you can't do anything bad with it.
I have absolutely seen more than enough exploits for "obviously unexploitable" vulnerabilities that I think the "out of bounds read = bad" mindset is the only reasonable mindset to have.
I certainly find there are many people who take that naive view.
The right thing to do is to triage them in your context and make your own risk assessment. The CVSS numbers are a guide to which ones you should prioritise looking at, but not a risk assessment.
However, in the security and risk world, you have to take a worst case view in the absence of further reliable information. This can lead to fixing things that maybe don't really need fixing, even when a triage has been done. It's often hard to discover enough reliable information.
One thing that academia takes seriously and the CVE database doesn't is the reputation of the researcher(s). The CVE database allows one to anonymously make a serious claim against other people's work, and thus avoid consequences if the CVE turns out to be incorrect. In academia this would be unacceptable: reputation is everything.
Now I'm not saying academia does everything right, and it does have a strong tendency for credentialism which is at odds with how security research has always operated. But it makes judging people by their actions a whole lot easier...
There are still corners of infosec where doing the wrong kind of research will get you the business end of a massive lawsuit... or a pair of shiny new bracelets. A lot of us started in security in an era where that was more of the rule than the exception.
That said, reputation is quite real. For example, taviso is well known for finding weird and scary things.
CVSS is not a measure for risk. I feel like it’s pretty hard to define an objective measure for “how much should you care about this” that applies to everyone, since you’d need to know how common the affected software is among all computers and how many of them are using the specific vulnerable configuration, etc.
What you can do instead is have a measure for “if this affects you, it’s pretty bad/not that bad/…”.
Heartbleed wasn’t an RCE, it just leaked sensitive data. RCE often results in a total loss of confidentiality, integrity and availability, so it receiving a higher severity rating makes sense.
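To make that concrete, here is a rough sketch of the CVSS 3.1 base-score arithmetic (scope-unchanged case only, with simplified rounding) using the spec's published weights: a network-reachable bug that only breaks confidentiality, Heartbleed-style, comes out at 7.5, while the same bug with full loss of confidentiality, integrity and availability comes out at 9.8.
    import math

    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}  # confidentiality/integrity/availability weights

    def base_score(c, i, a, av=0.85, ac=0.77, pr=0.85, ui=0.85):
        # Defaults model AV:N / AC:L / PR:N / UI:N (network, low complexity,
        # no privileges, no user interaction), scope unchanged.
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 6.42 * iss
        if impact <= 0:
            return 0.0
        exploitability = 8.22 * av * ac * pr * ui
        return math.ceil(min(impact + exploitability, 10) * 10) / 10  # "round up" to 1 decimal

    print(base_score("H", "N", "N"))  # 7.5 -- leak-only, Heartbleed-style
    print(base_score("H", "H", "H"))  # 9.8 -- full compromise
Nothing in that arithmetic knows whether the bug is reachable in your deployment, which is why it can't be read as a risk measure.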
Honestly if you just stick with CISA's known exploited list then you will be ahead of most operators and spend less time ghost hunting stupid shit.
I really have come to despise CVSS (and especially how Tenable will put predictions in the CVSS column for officially unrated vulns) and how assessors seemingly do not have to justify their 'findings'. Not exploited, no PoC, needs root to use - why is this a High? Come back to me with something good.
I personally think this is all by design to create more jobs in the booming snake oil infosec industry. Hear me out before you get the pitchforks.
I've started seeing some of the most incompetent people I used to work with are suddenly now "director of security", "senior infosec" and things like that. These were people that struggled to remember what an IP address was when I worked with them. I highly doubt they all suddenly decided to actually learn anything about how computers work.
The icing on my anecdote-proof cake is that my junkie brother who lives with my mom contacted me about 2 months ago to ask how he could get into the computer security industry.
It's been like this. There are a lot of people that got into security without really having a passion for it. Lots of bs. The way I think of it is that all of this revolves around the fire/heat generated from finding vulns (on both sides). If you're not finding vulns, really understanding them, or actively trying to understand them, then you're just blowing smoke and trying to stay warm. That's just how it is.
You're conflating two systems here: CVSS' ratings and CVE's vulnerability IDing
Something receiving 9.8 when you feel like it should be a 4–5 is a common enough complaint, though typically not this extreme. If the parameters are filled in correctly (you didn't mention even checking why the result was 9.8) then that's not a flaw in the CVE system.
Conversely, CVE authorities not responding to requests for deletion isn't the same problem as the impact/exploitability calculation system (CVSS) being inaccurate.
> Unfortunately, the CVE database(s) are too noisy to be useful.
I used to hate on CVSS scores a lot more than I do these days.
Sure, the scores are rarely correct and sometimes nonsensical. To be fair, it's impossible to have a scoring system that precisely ranks every potential risk in the optimal order for everyone.
Technically, the best answer is to take every report, evaluate it internally based on exactly how this code is used, where and for what and create our own risk assessment and prioritize changes based on that. That would get us risk scores as good as we're going to get for our product.
Nobody has time for that. Even in the best funded organizations I've seen, it's not going to happen.
Turns out it's like an order of magnitude cheaper to just say we must address Critical, High and Mediums and call it a day. As much as it bothers me at a technical level, because I know we're fixing stuff that wasn't worth fixing and leaving out some important fixes that got mis-scored too low, the pragmatic reality is this is easier to do and thus can get done. On average, that will capture many of the things that really did need fixing so our security did get better. Yes, we did waste time fixing some that didn't need it, but it was less wasted time than it would've taken to do a deep evaluation so we still saved some time. So overall, not bad.
Do you have any thoughts on CVSSv4[0]? It appears to incorporate finer-grained and organization-specific scoring to address issues many have with the one size fits all approach currently used for CVEs.
I recently had similar thoughts about a security vulnerability in a storage component, which is sometimes even intended to be exposed to the internet as a static file host.
That thing had a security vulnerability such that you could do one curl call and get the entire environment of the system, including the global, special-cased admin user credentials for that system. So, practically speaking, an unpatched system has no authorization and is open to anonymous access.
Now I'm not sure if I'm weird, but I have a hard time imagining a worse bug in the authorization system of a data store than this kind of trivially exploitable fail-open. Easily in "stop and drop everything and start patching" territory. But nah, it's just a 7.5. All is good.
On the other hand, the fact that the root user can bypass some restrictions and upload files with some permissions is rated as an 8.8 because it's a possible RCE jump. Naturally.
That reminds me of bogus `npm` results. You use a regex at build time? Oh, apparently you are exposed to a ReDoS attack even though that regex isn’t available at runtime.
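For anyone who hasn't seen one in action, here is the usual toy demonstration of catastrophic backtracking in Python (pattern and input invented for illustration); whether this is a vulnerability still depends entirely on whether untrusted input ever reaches the pattern at runtime:
    import re
    import time

    pattern = re.compile(r"(a+)+$")        # classic catastrophic-backtracking shape
    for n in (16, 20, 24):
        payload = "a" * n + "!"            # trailing "!" forces the engine to backtrack
        start = time.perf_counter()
        pattern.match(payload)
        # matching time grows roughly exponentially with every extra 'a'
        print(n, round(time.perf_counter() - start, 2), "s")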
Or that time when Python disallowed string-to-int conversion of long strings because it was deemed a DoS vulnerability. The lack of sanitisation led to DoS vulnerabilities in applications, but the conversion is obviously not a vulnerability in itself. Oh, and now those apps have a DoS that just halts the application because the conversion throws ValueError.
It’s hard to say what the solution to that problem is. Oftentimes security vulnerabilities indeed don’t cause any problems alone and require a chain of other exploits to actually cause trouble. At the same time you don’t want to treat every function that runs in O(n^2) on input as a potential DoS attack. Distinguishing between a simple bug and a vulnerability requires contextual knowledge and expert judgement that simply cannot be built into bureaucratic organisations like MITRE.
This is my favorite lately. Sure, some insane people use these packages for backend services, but we are just building a big JS blob that's statically shipped to the browser; none of the "vulnerabilities" in them apply. But try getting this across to people who think they need to track security issues they don't understand (not fix them, of course).
> Distinguishing between a simple bug and a vulnerability requires contextual knowledge and expert judgement that simply cannot be built into bureaucratic organisations like MITRE.
Why can’t it? MITRE runs FFRDCs. Having worked for an FFRDC (although in electrical engineering, not computer security), if you were to tell me that engineers in FFRDCs or UARCs in my domain simply cannot utilize contextual knowledge or expert judgment (or that their knowledge and judgment would be completely overridden by management such that it appears they can’t) then I would wonder if you ever interacted with that kind of organization.
Is computer security somehow different to e.g. radar or comms systems engineering?
The scope of use is very underspecified. If your program is disconnected from the internet, all remote execution bugs stop being security bugs. If you can provide specific input to run arbitrary code in a program that already requires elevated privileges to run, it is not a security bug. Etc, etc.
Sorry: I'm confused. Are you saying this is why an organization like MITRE cannot build contextual knowledge and expert judgement into itself? Or are you saying this is how computer security differs from radar and/or communications systems engineering?
> Or that time when Python disallowed string to int conversion of long strings because it is a DoS vulnerability.
You're right that applications should have range checking of course, but the default limit is 4,300 digits, which seems enough for all but the most specialized applications; this number is huge. It makes sense to have a reasonable limit so your server isn't using all the memory and CPU time by accident. It's the same as max recursion limits and the like.
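For reference, this is the behaviour in question on CPython 3.11 and later; the limit is adjustable, so applications that legitimately need huge numbers aren't stuck:
    import sys

    print(sys.get_int_max_str_digits())        # 4300 by default
    try:
        int("9" * 5000)                        # over the limit
    except ValueError as exc:
        print("rejected:", exc)

    sys.set_int_max_str_digits(100_000)        # raise the cap (0 disables it entirely)
    print(len(str(int("9" * 5000))))           # 5000 -- works again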
Personally I don't think any DoS is a "security problem" though. It can be a problem, just as many bugs are, but "site is unavailable for a bit" or "my desktop application crashed" is just not on the same order as "I lost all my money". Last month my bank's website was unavailable for a week, and it was annoying but basically fine, with no real serious consequences.
It's awful that a system meant for the benefit of maintainers and users is now being perverted into such theatrics. In a way, if you want to undermine the concept of security this is one of the ways you can do it.
On the user side of this token I experience similar theatrics. Monthly I have to deal with a list of vulnerabilities that are detected within our software. Some are easy fixes, like updating a package manager, which equates to just rerunning a pipeline. Others are not, and I found myself recently trying to rebuild an entire container build process because the software is on a biannual release process and they'd used the x/crypto package in Go, which contains an SSH client and server. The server was vulnerable and the software we were using made use of the client, but not the server. Regardless, our software was flagged with a high vulnerability. The utilities available are smart enough to take apart my Go binary during container inspection but not smart enough to figure out if I actually use the vulnerable thing. The resulting toil and theatrics aren't just a time waster, they're a soul-sucking activity laden with difficult tasks to achieve minor outcomes like "how do I make sure even irrelevant CVEs never show up in the first place" - which is probably not what you want.
If it weren't for CVEs it would be a lot more difficult to convince management to update the codebase and get rid of legacy crap even though I'm fully aware that almost all the CVEs don't apply to our project. It's just better for productivity/QOL.
I think it's normal that CVEs don't convey all the nuance, and at the end of the day it's up to you to make an assessment of whether you're vulnerable. Though the curl CVE seems like nonsense no matter how you look at it. I could see it being an issue if the delay were being set by an untrusted user... but they could also just set a very short delay without taking advantage of this flaw. So, moot point.
And where, if you beg them to allow code review (especially for the code made by your incompetent offshore teams), they say it's too expensive and uses too much developer time - but then they'll pay a subscription for garbage static analysis tools that costs enough to cover multiple full-time dev salaries.
Hell, SREs will tell you it's better this way - if you're regularly burning your house down and rebuilding it anew, there isn't time for maintenance problems to creep in!
Marking a defect as critical under a typical large company's governance controls leads promptly to re-prioritizing work and sometimes to disabling otherwise-online and profit-generating systems.
Some buyers insist on this in contracts with technology providers.
That feels very much like burning cash, even if it's lost profits and not technically destroying a tangible object.
Basically the programmer (not the attacker) had to write code where an object contained itself:
    // A map that contains itself; asking Jackson to serialize this recurses forever.
    HashMap<String, Object> map = new HashMap<>();
    map.put("recursive", map);
After this, Jackson would indeed stack overflow if you asked it to serialize the object to JSON. Then again, half the built-in Java functions (e.g. getting an object hashcode for the map object) also fail for a recursive structure.
The issue remains open 3 months later, MITRE still thinks it's hella serious, and people have yet again learned to just ignore their CI warnings about CVEs.
There's lots of common sense missing in how CVEs are used at ground level in a company also.
Like, say, I use node.js in my project solely in the build/deploy process, and not at all in the deployed code. Something like using the serverless framework to make python AWS Lambdas.
The processes in place often can't make that distinction, so some "high /critical node.js when used as a web server" problem halts all builds, even though there's no node.js in the deployed application, and node.js is only used cli style.
This is why having an accurate scope is key. Easiest way to do that in an hour is to prefer to scan a workload like a container instead of a code repo. Or have a scanner that understands how something is built (and ideally how it runs).
As I mentioned a couple days ago [1], this problem will get much worse in the EU with the upcoming CRA. Either the only software vendors left will be big ones with the capacity and leverage to counter these CVEs, or companies will turn to obscure, bug-ridden software which have no CVEs because no one bothers to research them.
Expect a closed-source analog of GNU, a license that forbids showing source code, plus tools for obfuscating binaries masked as "optimizers". There will be a demand for anything that reduces the chance of CVE discovery and reduces the number of open source SBOM entries.
Open source most likely becomes a liability under the CRA.
Yes, and the next step is that the vendor will sue researchers for posting CVEs which they consider "bogus", accusing them of hurting their business.
It's not so much that open source becomes a liability, but rather that "well known", whether open or not, becomes a liability. If you pull a random JSON library from GitHub that does not even appear in any vulnerability database then you will probably be fine.
Shit, I think the best way to fight this is to file bogus CVEs against proprietary software vendors. You'll see the industry demand CVEs be fixed within a month.
You can, after all, file CVEs anonymously. It takes no effort to stand up a fake "research firm", or two, or three doing so.
Bonus points if the CVEs are against all the shitty cybersec scan tools.
You can file CVEs anonymously, but CNAs don't have to assign a CVE number if they consider a submission bogus. Depending on the CNA, they may also verify the submission themselves or contact the vendor before assignment (usually the vendor has to confirm the vulnerability before publication, and often the CVE publication date is decided together with the vendor if the vulnerability is not already public).
Of course there are shitty CNAs that don't care. But honestly CVEs are a useful tool for communication, people (both security people and non-security people) should stop obsessing over them.
So what kind of good outcome do you propose? If I correctly understand what you are talking about:
- demand that open source projects fix all CVEs within a short, fixed time AND that all projects integrate all upstream and dependency changes almost immediately (or else maintain an array of older versions that are still used by someone)
- demand that proprietary software vendors avoid using any open source project that does not provide the above, and that they switch, within a short, fixed time, from any such dependency that has ceased providing these guarantees (e.g. the maintainer quit) to another one that does.
Which still leaves you with the risk of breaking stuff on upgrade, because now your product is basically a baby Debian sid: you must pull the newest versions all the time and cannot stay on a stable version (which will not get patches).
Agreed on the "well known" part.
Also, it will depend on the particulars of how the development must be disclosed.
E.g. if a vendor drops a copy of the public-domain libtom into its git repo and changes it up a bit - is that still a separate dependency?
What if a vendor goes "hey Siri, Copilot, GPT-4, write me a TLS handshake implementation" and uses whatever the resulting garbage is - is that ok?
The bogus CVE problem has caused delays in my projects because the CIO wants our COTS scanner tool reports to show 0 CVEs, or a detailed explanation of why each one is not an issue.
I'm also having difficulty communicating that CVSS is not a measure of risk, and that many of the ReDoS vulns are very much dependent on context.
My all-time most hated CVE is CVE-2022-24975. This is not a vulnerability - it's a user education issue with not understanding how Git works. The Git documentation not explicitly explaining the dangers of using mirror was rated a * 7.5 *. Due to the policy of having no high CVEs, this ground development at my company to a halt for an entire day until our security team put an exemption in place.
Service-level contracts and governmental requirements mean that a critical CVE needs to be addressed in short order, so non-critical bugs that get marked that way can cause real problems.
Bureaucracy destroys common sense once again.
If every developer pushed back on bogus CVEs and called them out for the BS they are, with a detailed explanation of why, then we might get some changes. But unfortunately just mentioning anything "security" gets such a paranoid response from most of the population that it's a difficult battle.
Outside of regulation, we also have the dumpster fire called cyber risk ratings gaining more and more leverage in vendor risk assessments; good luck arguing with an enterprise vendor risk department about why having 100 CVSS 9.8 vulnerabilities actually doesn't impact anything.
I work on this exact problem for a cybersecurity company, and it’s super challenging 1) because it’s like trying to assign a probability to an earthquake (how do you validate your model with such few ground truth events?) and 2) like you mention, the more accurate we make the model, the more pushback there is as people begin to lose trust when it doesn’t match their intuition.
What’s a good way to make a convincing case of “trust us, we know this risk assessment doesn’t look intuitive but it’s better than the common XYZ approach because…”? Or is this a losing battle, and we should just allow customers to rank by CVE severities if that’s what they really want to do?
Some smaller startups are starting to work toward providing cyber VaR — quantify the probabilistic loss in financial terms over a given time period. This is really the only direction forward in cybersecurity IMO as risk must ultimately be modeled with statistics, not hunches. But I don’t think the industry is ready for it (and neither are the models to be frank).
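For readers unfamiliar with the term, here is a toy loss-distribution sketch of what cyber VaR boils down to; every parameter is made up for illustration, and a real model would be fit to incident and claims data:
    import random

    def annual_loss(rate=1.5, mu=11.0, sigma=1.2, rng=random):
        # Incidents arrive as a Poisson process (`rate` per year); each incident's
        # loss is lognormal(mu, sigma) dollars. Both choices are purely illustrative.
        t, loss = rng.expovariate(rate), 0.0
        while t < 1.0:
            loss += rng.lognormvariate(mu, sigma)
            t += rng.expovariate(rate)
        return loss

    random.seed(0)
    losses = sorted(annual_loss() for _ in range(100_000))
    var95 = losses[int(0.95 * len(losses))]
    print(f"95% one-year cyber VaR ~= ${var95:,.0f}")  # losses exceed this in only 1 year in 20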
There may be nothing novel here for you given you're already working in the problem area, but as established, a CVE's severity score is completely hypothetical on its own; if your customers insist on looking at that one variable, you could always make the case that it is incomplete on its own and should always be supplemented with data such as the following (a toy sketch of combining these comes after the list):
a) is it actually exploitable
b) is someone exploiting it in the wild
c) accessibility of component/asset/application/whatever that carries the CVE (e.g. is it internet-facing or a non-networked subcomponent hidden under 15 layers of defense)
d) other exploitability scoring methods, like EPSS
e) etc...
(Sometimes I like to use the analogy of trying to gauge your body's health based on one variable, like your body temperature - it can be high temporarily [in case of a flu] but it doesn't tell you anything about your overall health [as we all get sick occasionally].)
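Here's the toy sketch mentioned above of folding those signals together; the weights, thresholds and field names are invented for illustration, not any standard:
    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        cvss: float           # base severity, 0-10
        epss: float           # estimated probability of exploitation, 0-1
        in_cisa_kev: bool     # known to be exploited in the wild
        internet_facing: bool
        code_reachable: bool  # does the vulnerable code even run in this deployment?

    def triage_priority(f: Finding) -> float:
        if not f.code_reachable:
            return 0.0                        # record it, but don't page anyone
        score = f.cvss * max(f.epss, 0.05)    # severity tempered by likelihood
        if f.in_cisa_kev:
            score = max(score, 9.0)           # actively exploited: jump the queue
        if not f.internet_facing:
            score *= 0.3                      # buried behind other layers of defence
        return round(score, 2)

    # A "9.8 critical" that is unreachable from the internet and unlikely to be
    # exploited drops to the bottom of the queue.
    print(triage_priority(Finding("CVE-0000-0000", 9.8, 0.02, False, False, True)))  # 0.15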
The VaR approach is sensible but has to start much closer to the root of the problem: first create a value-based catalogue of all your assets, then simulate all prospective attack paths towards all assets (maybe above a certain value threshold to simplify the model a bit), then overlay attack/exploit probability data (a simple example would be a CVE dataset + EPSS scoring), and finally you have something resembling an actual data-based risk model you can quantify impacts on. It's quite a large task and I don't think there's a single player doing all areas; you'd have to patchwork together multiple different datasets.
Still, it's probably where the industry will end up in the next 10-15 year timespan.
(I'm somewhat involved in the area too but just as a threat data provider for larger models.)
Well that's a poor article. Not used to that from LWN at all. I think this sentence sums up most of its problems:
> In fact, the pull request for the fix was attached to the report, but that apparently made little difference in the assessment from NVD.
- CVEs are about vulnerabilities, not fixes. If everyone updated their software and all software updated its dependencies, I'm not sure we'd need a vulnerability tracking system at all! The fix being available doesn't change the impact or exploitability of the bug while you run the out-of-date software.
- The text makes it sound like ratings given are opinions (elsewhere it talks about NVD having a "change of heart"). NVD uses a calculation system called CVSS where you input parameters like whether the vulnerable component can be reached over a network, whether the vulnerability results in denial of service or in confidentiality being breached or whatnot. These are objective parameters, and while interpretation mistakes are extremely common¹, ending up with a vastly different score is rather difficult if you have all relevant information to fill it out in the first place (also because mistakes are likely to cancel out). The article acts as though it's their subjective assessment, but rather, the score changes with new facts being presented. (In an ideal world, of course. In practice they typically forget or aren't told the correct info. But apparently here they were told and did update, so that's good right?)
The article then goes on to state that curl uses its own subjective system now instead, and that it gave some bug an entirely different rating. The article does not say anything about who had the more correct result, but then goes with the assumption that curl's own subjective rating is better. Is that really what we want, when complaining that NVD gives bad ratings? More subjectivity, and less transparency about how they got to a particular result?
Anyway, I'm not in love with the CVE system or CVSS ratings either, but these are the wrong reasons.
Yet a narrow security view centered around single faults within single programs misses a lot.
Take backdoors, which are neither errors nor bugs, nor may be seen in the wild, but are simply vulnerabilities in waiting, working as expected by design.
Or a set of mere errors may, in concert, lead to a systemic and commonly exploited vulnerability.
Giving an identifier to a reproducible observation, in some context and time, is a good start. But to be useful it needs to be part of a wider, joined-up taxonomy that helps us understand risk better.
It's funny. I was just learning about RuneWatch, which is how the Runescape community attempts to democratize and distribute the problem of dealing with scammers.
The evolution was basically that it started the way CVEs work. You could submit a report and there wasn't really any verification. Of course, this was abused - far worse than the CVE system, naturally, but the same sort of problem.
Now they require a lot of evidence and things are much much better and abuse is far more difficult.
It seems sort of obvious. If this is a vulnerability, prove it.
It almost feels like a CVE should have a "practical" tag. Like:
1. Is this theoretically exploitable?
2. Is this practically exploitable?
3. Is this proven exploitable?
This should be a massive contributor to the score. It also means that a score can be raised by having the sec researcher demonstrate an exploit - so disagreements can be resolved very easily.
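A sketch of what that could look like mechanically; the tiers and multipliers are invented for illustration:
    from enum import Enum

    class Exploitability(Enum):
        THEORETICAL = 0.3   # plausible on paper, no known attack path
        PRACTICAL = 0.7     # a realistic attack scenario has been described
        PROVEN = 1.0        # a working exploit/PoC has been demonstrated

    def effective_severity(base_cvss: float, tier: Exploitability) -> float:
        # The researcher can raise the tier (and thus the score) by demonstrating
        # an exploit, which also settles disagreements.
        return round(base_cvss * tier.value, 1)

    print(effective_severity(9.8, Exploitability.THEORETICAL))  # 2.9
    print(effective_severity(9.8, Exploitability.PROVEN))       # 9.8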
Related, here's a post from Spender on the topic (since the "CVE bad" situation is nothing new)
Can relate. Dealing with "ReDOS" (regex DOS) CVEs used solely to win bounties from certain sites, paired with overly hostile and secretive "researchers", has turned me off from the whole CVE system entirely.
The solution at my workplace is a bot that opens PRs to bump dependencies and automatically merges if the tests pass.
It's taken a lot of workload off devs to meet security targets. But I worry it makes supply-chain attacks more attractive. If an attacker can compromise a package and it's instantly merged into the codebases of thousands of different companies that's a huge danger.
I've been building Packj [1] to detect dummy, malicious, abandoned, typo-squatting, and other "risky" PyPI/NPM/Ruby/PHP/Maven/Rust packages. It carries out static/dynamic/metadata analysis and scans for 40+ attributes such as num funcs/files, spawning of shell, use of SSH keys, network communication, use of decode+eval, etc. to flag risky packages. Packj Github action [2] can alert if a risky dependency is pulled into your build.
I don't know if you all know this but you don't have to fix CVEs. If the software author won't issue a fix, you are supposed to have an exemption process to accept the non-existent risk.
In my view, the CVE system is too strict; it needs more noise and more low-severity vulns that are hard to pull off in the real world. That shows the scrutiny the software has gotten, and seeing a lot of such vulns shows the software has a good, secure design. Contrary to public sentiment, not having CVEs does not mean the software is secure; it most likely means (and you should presume) it hasn't gotten proper scrutiny.
The main issue I have seen is how the scoring can be 9 or 10 without a working exploit.
It's a database to help people make decisions, not a compliance checklist. You should not misuse the database alone to measure how vulnerable you are; you should also be considering risk and impact as well as how fixable it is. But using it as a beating stick is far too easy.
Goodhart's law is in effect. Large corporates only see that the CVE exists and has a large number against it. There is no qualitative reading or investigation done. The number is big, therefore you have to fix it... (only helping perpetuate the issue).
That CVE is correct that the “gmon” component of glibc contains memory corruption bugs. In fact, I managed to find even more than the specific one that CVE is about.
But, the claim that this is a security vulnerability is a bit silly. These are profiler functions, which are usually only called in non-production builds with profiling enabled (gcc -pg), and even then only from CRT startup code. It is rather unlikely an attacker can exploit any of these bugs in them without already having the ability to run arbitrary code in the process, at which point they don’t need these functions, they’ve already got all these bugs have to give them.
The personal problems I have with the NVD and CVEs in general are:
- CVEs can be disputed, and the dispute doesn't have to be based on evidence: a CVE can be marked as not affected with as little justification as "EOL" or end-of-life. See 100% of CVEs filed against SAP, for example. All valid exploits, all work, all are used in the wild by ransomware, and all are still disputed by SAP.
- CVEs are free-form text, which makes even version strings impossible to compare programmatically. Most projects down the line don't "sanitize to ASCII" and don't know about this. But there are many CVEs (~1%ish) with Chinese punctuation symbols used as a "." character replacement. Nobody fixes that, ever. (A small normalization sketch follows at the end of this comment.)
- CVEs and their affected hardware/software matching strings (CPEs) are utterly useless. They are an abstraction for the manual Excel-spreadsheet world, and literally every Linux security tracker implements its own somewhat crappy parser to match against the real upstream projects, their package names and/or the source code origins (e.g. a git repo link).
- CPEs are redundant. The Linux kernel itself has more than 200 different CPE strings, and none of them are ever corrected after the reporting person has filed them. What the actual duck. Have fun trying to find out if Arch Linux is affected when the initial CVE was filed against, say, SUSE or Oracle Linux.
- Most hardware vendors wait to confirm or deny anything until their hardware is EOL. Smart, right? Nobody knows so it must be secure, right? Looking at you, Cisco, Fortinet and the like. And now you are wondering why exploits from 2016 are still used and shared in Telegram channels of ransomware developers?
- There is no mandatory PoC in the CVE database. This could change so much in the ecosystem, especially with the hard-forked, still-patched, feature-frozen packages of distributions like Debian. Debian's security tracker is littered with "marked as fixed" but actually "can't fix it because the code diverged too much from upstream already" CVEs. And guess what, those are exactly the exploits used in the wild by root/bootkits.
- As the article author states, CVEs are never corrected after the fact. This is the biggest issue of all of them. Nobody maintains a sane state of the database; even the new git repositories are buggy as hell and don't validate against their own schema. Nobody fixes those CVEs "upstream" and nobody maintains old CVEs that turn out to affect more upstream projects. The prime example is log4j, because literally almost all Java software is affected when it bundle(d) it, which almost all of it does. I complained about this but it basically can't or won't be fixed [2], [3], [4] because even the new schemas are experimental (for over 8 years now).
Source: am scraping literally all linux security trackers to make a better CVE database for automatic comparison purposes. And it's a super ducked state of affairs. [1]
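Re the free-form version strings above, here's the kind of clean-up every downstream consumer ends up writing for itself; the replacement table is illustrative, not exhaustive:
    import unicodedata

    DOT_LOOKALIKES = {"\u3002": ".", "\uff0e": ".", "\uff61": "."}  # ideographic/fullwidth full stops

    def normalize_version(raw: str) -> str:
        # NFKC folds most fullwidth punctuation; the table catches the rest.
        s = unicodedata.normalize("NFKC", raw).strip()
        for bad, good in DOT_LOOKALIKES.items():
            s = s.replace(bad, good)
        return s

    print(normalize_version("2。4。51"))   # -> 2.4.51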
Do you have any thoughts on CVSSv4[0]? It appears to incorporate finer-grained and organization-specific scoring to address issues many have with the one size fits all approach currently used for CVEs.
Most vendors that I am scraping already have a confidence score, which is approximated at a statistical level. For example, you can't trust the fixed states of Ubuntu and Debian, so they get a lower confidence score compared to, say, Arch Linux, which has the highest confidence in that regard.
Matching package names overall is what I was using the CPEs for initially, but it's way too much overhead to match those in a separate database/table.
CVE scoring is parasocial activity. Hence so much drama.
Similarly to SemVer, the good-faith grader attempts to convey a sizeable blob of knowledge... by compressing it into a one-dimensional number. No matter the scoring formula, this step is lossy.
On the receiving end of this communication, all you can do with the score is add a huge grain of salt to it, then perhaps use it to prioritize your review queue. You still must check the details and work out a judgement tailored to your specific context. There's no other way.
There isn't a choice for the grader either, to skip the obscenely lossy scoring step. Just like with release versions, they must do it, as the audience consists of an unbounded number of engineers; faithfully doing it saves mountains of time for everyone involved (present and future).
Just like with dependency upgrades, it's the consumer's choice to disregard CVE scores (version numbers), vulnDB entries (changelogs), or even existence itself of a vulnerability (upgrade). Likewise, it's their fault if consequences arise.
Viewed this way, it can be seen that anecdotes of pointwise drama will continue (even when the bulk of activity chugs along happily, efficiently and quietly) -- because at the core of it, CVE IDs and scores are just that, a communication tool. It mostly can't make strangers exercise care or spend more effort than they're willing to. It can optimise the utility of the attention that they do pay.
> Various CNAs assign CVE numbers for their own products (e.g., Microsoft, Oracle, HP, Red Hat)
I wonder if this is part of the problem. The big players aren't hurt by this problem because they get to decide what's a real CVE for themselves, so there's little commercial push for change.
So, uh, what's the problem? CVEs are just meant to ensure that we're all talking about the same bug when we're talking about a bug. If you're making a big deal about a CVE existing for a bug you don't care about... that's on you?
This is the problem: "Service-level contracts and governmental requirements mean that a critical CVE needs to be addressed in short order, so non-critical bugs that get marked that way can cause real problems."
You are right what CVEs are, but there are plenty of people who think CVEs are something different.
You are describing the problems related to Vogons who assume all bureaucratic inputs are perfect. All tickets should have an escape hatch where some person with direct experience with the software can report false positives or “not an issue”.
Those are different from the purpose of the article, which is describing the variance in quality of CVE data.
Not to mention the completely trash cyber risk ratings industry.
If you score poorly because of their absurd scoring methods, it will potentially impact revenue streams if you have to deal with vendor risk departments, etc. who don't understand any of the underlying mechanics.
Because of the incoming CRA and existing requirements, having a CVE causes wide, lasting, high-impact consequences for people maintaining software and hardware products, infrastructure and services.
Since the barrier to entry for a CVE is as low as "logged-in user can crash the system by filling disk/RAM, can cause a data leak I think", and that thing can get assigned high marks depending on the randomness of this whole process, you are in a situation of "one bogus report can cripple all users of an open source library".
Who is more likely to be able to score a CVE accurately — a nameless faceless CVE triager at NIST (or similar organization) who is working as a liaison between the reporter and the maintainer… or the maintainer himself?
The CVE is a pure record. It has no guarantee of accuracy and should be treated with this knowledge. That's what tptacek's comment is saying. People who attach all sorts of contractual requirements on the assumption that every CVE is perfectly reported and recorded are setting themselves up for inevitable garbage-in, garbage-out situations.
Also, if you’re curious, you should check out his post history. He’s kinda famous on HN in the security space.
Yes, please, tell all those auditors, security scanner vendors and journalists that they should treat CVEs with this knowledge. That having a report with "1 critical vulnerability (CVE-2082-555) !!!!" does not mean that you have a security problem or that your equipment vendor has to drop everything and go fix it.
You are telling me what CVE is.
I am telling you what the effect of having CVE is.
I mean, do tell them that because you're right. CVEs are not broken, your auditors, security scanner vendors and journalists are. Sounds like if you replace CVEs with another system you'll get exactly the same mess.
Funnily enough the current webp situation is the perfect counterpoint and sort of speaks to the fairly unregulated way CVEs are assigned. Multiple CVEs for the same core issue playing havoc and hampering remediation. A more robust system would indeed promise that we're all talking about "the same bug"; we aren't, currently.
Then there are the erroneous CVEs closer to what OP is talking about, like the more recent CVEs registered for deliberately vulnerable apps like OWASP Juice Shop etc. In practice, the current system gets a lot right, but its lack of scrutiny creates issues.
I agree, there's a lot of confusion between a CVE and CVSS. One is just an identifier, the other is its score. I am much more of a fan of using CVEs than assigning arbitrary marketing terms like xBleed.
I am in favor of the severity system being entirely removed.
Attempting to score the vast majority of vulnerabilities against a universal axis is comical and falls apart as soon as you try any kind of real world application.
I guess the problem is that people use automated scanning tools, see CVEs with high scores, and then go and put pressure on FOSS maintainers about stuff that a) isn't urgent and b) doesn't have a high impact?
For anyone working with CVE and other advisories there's a cool search engine and feed for those - Vulners[1]. They have integration with various providers and tools, as well as allow to search also for exploits among numerous blogs.
I feel like there are a lot of CVE's in the JS community that basically require an attacker to already have access to server side code execution. That seems like it raises some bigger questions.
It is not just a CVE problem. Many sec standards feel really random and are difficult to work with. You can get better scores on some of these sec scans if you open up some ports in the firewall.
Step 1 would be to not treat vulnerability == CVE. It's a simple change in terminology but we need to make sure to communicate that CVEs are just one, probably the most popular, but just one source of vulnerabilities.
Governance and security teams the world over need to learn this.
The next step is to establish good tooling around publishing VEX statements (Vulnerability Exploitability Exchange). That is currently not easy.
In the EU we currently have the discussions about the upcoming Cyber Resilience Act which has language in it that says products must be "free of known vulnerabilities" to be allowed to be placed on the European market. We're trying to change that language but it ain't trivial. The current best proposal is saying "known _exploited_ vulnerabilities".
Otherwise I'd be able to DoS my competitors by publishing bogus vulnerabilities.
For that we have at least two sources: CISA KEV[1] (which just celebrated its thousandth entry) and FIRST EPSS[2]. There might be more; these are the ones I'm aware of.
We are a software vendor and we want to do the best we can but at the moment it's a box ticking exercise that's not useful to anyone. We want to look at all vulnerabilities but we want to focus on the important ones. We need to be able to say "our product is not affected by this vulnerability" and we need our customers to trust us. Currently, they often trust the CVE/NVD database more which is a huge problem because they are not experts in the specific products.
And especially for things like libraries you need to take the context into account in which it is used. Vulnerabilities are reported (using CPE, pURL etc.) at the "library" or "application" level but they really exist at a much more granular level (e.g. a single function is affected and if that's not used there's no problem).
I'm convinced that the whole space of vulnerability disclosure and management will change significantly over the next few years.
We happened to stumble into this and are now active in trying to shape the Cyber Resilience Act into a form that makes more sense because that will force basically every European company to now start this process. Other countries will follow (yes I know the US already has rules for some sectors but not everything, so do other countries).
It's not only that the CVE database / process is broken, but via EO 14028 in the U.S. and CRA in Europe transparency is mandated via SBOMs. While I believe this transparency is good, it can be abused to enforce compliance-driven security: Fix all critical, high and medium CVEs within well defined time frames. PCI DSS and many other standards kind of encourage that view already today. It will then just be measurable by outside parties, which then means the limited security budget will be used to "fix" things that don't matter as much.
And I agree with you, Lars: We should be using CISA's KEV, First's EPSS and other means. But I'm not sure software customers would accept seemingly higher risk (high number of unfixed critical / high CVEs), even if the EPSS suggests overall much lower risk.
I've written in longer form at [1] about this issue.
> Next step ist to establish good tooling around publishing VEX statements (Vulnerability Exploitability Exchange). That is currently not easy.
Agreed.
Tooling is needed for creating/publishing/consuming both VEX statements for applications (i.e. exploitability of a dependency in context), and VEX statements from library authors (many times the actual experts, like the OP) as a way to dispute the weakness/exploitability.
Currently working hard on this [1]. When the tooling is in place the next step for the industry will be discoverability of SBOM/VEX/VDR attestations. Sigstore/Rekor [2] looks to be a viable alternative here.
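For readers who haven't met VEX yet, a statement boils down to "this product, this vulnerability, this status, this justification". Shown below as a plain Python dict rather than any particular serialization (OpenVEX and CycloneDX VEX both encode the same idea); the field names and values are indicative only, so check the actual specs before emitting real documents:
    # Indicative shape only -- not a schema-valid OpenVEX/CycloneDX document.
    vex_statement = {
        "vulnerability": "CVE-0000-0000",                        # placeholder ID
        "product": "pkg:maven/com.example/some-library@1.2.3",   # hypothetical purl
        "status": "not_affected",                                # or affected / fixed / under_investigation
        "justification": "vulnerable_code_not_in_execute_path",
        "statement": "The vulnerable parser is never invoked; we only use the client API.",
    }

    # A scanner that consumes this can suppress the finding instead of failing the build.
    print(vex_statement["status"], "-", vex_statement["justification"])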
> Vulnerabilities are reported (using CPE, pURL etc.) at the "library" or "application" level but they really exist at a much more granular level (e.g. a single function is affected and if that's not used there's no problem).
Again, agreed. Reachability info needs to be commoditized, standardized and shared. The current situation where it's a feature used by vendors to compete is not ideal.
Do you have any thoughts on CVSSv4[0]? It appears to incorporate finer-grained and organization-specific scoring to address issues many have with the one size fits all approach currently used for CVEs.
This already exists today where you can do custom scoring and some companies (e.g. Red Hat) already do so.
CVSSv4 fixes some things, yes, but not the underlying issue which isn't so much a technical challenge (partially, sure) but a shift in policies and thinking.
The current model of "we need to get to 0 vulnerabilities in our scans" will lead to malicious compliance[1] and worse results compared to being able to focus on the few vulnerabilities that are really important.
At least that's my very strong opinion.
From the article “The report was for a legitimate bug, where the --retry-delay option value was being multiplied by 1000 (to milliseconds) without an overflow check. But what it was not was a security bug, Stenberg said; giving insanely large values for the option might result in incorrect delays—far shorter than requested—but it is not a security problem to make multiple requests in a short time span. If it were, "then a browser makes a DOS [denial of service] every time you visit a website — and curl does it when you give it two URLs on the same command line", he said in a followup post.”
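To illustrate the class of bug (this is not curl's actual code, just a simulation of C-style 64-bit signed arithmetic): a seconds value converted to milliseconds without an overflow check can wrap around and produce a delay far shorter than requested.
    def seconds_to_millis_64bit(seconds: int) -> int:
        # Simulate fixed-width signed arithmetic: multiply, truncate, reinterpret.
        ms = (seconds * 1000) & 0xFFFFFFFFFFFFFFFF
        return ms - 2**64 if ms >= 2**63 else ms

    huge = 2**63 // 1000 + 1                  # an "insanely large" retry delay, in seconds
    print(seconds_to_millis_64bit(huge))      # negative, i.e. nothing like the requested delay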
I agree there's plenty of junk in the CVE database, but devil's advocate: "a system relies on a user-supplied sleep to trigger a web request which executes some function which, if executed too quickly, may cause double transaction execution or similar."
Almost any functional bug (and arguably some UI bugs) _can_ have security or real world impacts under some scenario. CVE cannot generically say that any bug does not have security implications, noting that not all security bugs require a threat actor.
A data breach is bad, regardless of if it’s a hack or if it’s somebody accidentally leaving a pile of printed PII on a bus, even if most piles of paper left on busses are just trash.
CVE-2020-19909 is everything that is wrong with CVEs (74 comments) https://news.ycombinator.com/item?id=37267940
and
Bogus CVE Follow-Ups (7 comments) https://news.ycombinator.com/item?id=37394919