U.S. Government Disclosed 39 Zero-Day Vulnerabilities in 2023, First-Ever Report (zetter-zeroday.com)
221 points by jc_811 82 days ago | 148 comments



I hope this signals a turning point and lessons learned from the historic practice of hoarding exploits in the hopes they can be weaponized.

when you disclose vulnerabilities and exploits, you effectively take cannons off both sides of the metaphorical battlefield. it actively makes society safer.


Governments who want power will hoard the knowledge, which is power. Other governments will share. This is a perpetual tension: we collectively receive utility when good policy is active (rapid dissemination of vuln info), but we need third parties to seek these exploits out when government cannot be relied on. Very similar to the concept of journalism being the Fourth Estate imho.

(vuln mgmt in finance is a component of my day gig)


You can't hoard knowledge, just like you can't take it away.


Huh? You absolutely can hoard knowledge and you can most certainly purge knowledge as well.


Only predictive capabilities & AI trained on data can be more valuable than having a perfect profile of the person who turns out to be an enemy of your nation in whatever sense. Taking a person down, once you know everyone they have ever talked to and everything they have ever been interested in or thought, is trivial. This can only be achieved by mass surveillance & hoarding.

Arguably the US as a nation state has the very best LLMs in the world, which is why I personally think they have been running weak AGI for a few years, e.g. for autonomous malware analysis, reverse-engineering, and tailored malware generation & testing capability. Because they can actually store the personal data long term, without having to delete it, this may be a gigantic strategic advantage, given that the web has become highly "polluted" after 2020-2021.

From this I would guess the US has bet on AI research since the end of WWII, and especially within the last 30 years, noting the rather remarkable possibility that Surveillance Capitalism is actually part of the nation's security efforts. The warehouses of data they have built are warehouses of gold, or rather gold mixed in sand, since of course lots of it is also garbage.


Has the US done anything that huge and kept it secret? The Manhattan Project is the only one I can think of, and that involved sequestering the entirety of the nation's top physicists, chemists, and metallurgists in the desert and employed 130,000 people. This was before the internet and during a time when Americans were unified in a war effort in a way that hasn't existed before or since.


This is a little bit like talking about why they hoard the guns. The reason governments have caches of exploit chains is not hard to understand.


It is substantially different from hoarding guns: not hoarding exploits takes away those exploits from adversaries.

If an important factor is the ratio of exploits A and B hold, then publishing the exploits that are hidden but known to both does not leave that ratio unchanged.

The ratio is interesting because the potential exploitation rate is proportional to the number of zero-days held (once used, the "zero day" is revealed and remediated after a certain time span).
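
A toy illustration of that ratio point (all numbers invented), as a quick sketch:

    # Toy model of the ratio argument above; all numbers are invented.
    a_total, b_total, common = 10, 6, 4   # exploits held by A, by B, and known to both

    # Before disclosure, each side can use everything it holds.
    print(a_total / b_total)              # 10:6, about 1.67

    # After the common exploits are published and patched,
    # only the privately held ones remain usable.
    a_private, b_private = a_total - common, b_total - common
    print(a_private / b_private)          # 6:2 = 3.0, so the ratio does not stay the same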


You're the expert, and certainly not wrong. I wrote my comment because I have had to explain this to folks in a professional capacity and thought it might be helpful.


It's just a rhetoric thing. We're talking about the USG "hoarding" stuff, but, in a sense, the government has a conceptual monopoly on this kind of coercive capability. Anybody can find vulnerabilities and write exploits, but using them in anger and without mutual consent is an authority exclusively granted to law enforcement and intelligence agencies. This is just an extension of the "monopoly on violence".

The longstanding objection to this is that secretly holding a gun doesn't intrinsically make everyone less safe, and there's a sense in which not disclosing a vulnerability does. That argument made more sense back in and before 2010; it doesn't make much sense now.


Most likely these vulnerabilities were known by adversaries, and they decided to report them to make it more difficult for those adversaries to attack.

I’m sure the really juicy zero days they’ve discovered in-house are kept out of reports like these


This is the most likely scenario. It's not like the government has decided they no longer need to hang on to zero days to use against adversaries.

They've just determined these ones are either no longer useful to them, or adversaries have discovered and begun using them.


It literally is the scenario, as really the only outcome of the VEP (for serious, "marketable" vulnerabilities) is "disclose once burned".


The US depends on exploits being available for the companies it uses to circumvent the 4th amendment.


The problem is when they don't know that their adversaries know about those exploits...

There's a lot of arrogance and hubris with the idea of NOBUS, and they often make things worse assuming only they know...


> when you disclose vulnerabilities and exploits, you effectively take cannons off both sides of the metaphorical battlefield. it actively makes society safer

If I know you always disclose, and I find something you haven't disclosed, I know I have an edge. That incentivises using it because I know you can't retaliate in kind.

The hoarding of vulns is a stability-instability paradox.


How would you ever know that someone always discloses, if you can't know what they don't disclose?


You can’t know (in the mathematical certainty sense) that they always disclose. But you can know if some entity has the policy of always disclosing. Those are two different things. A policy is about the intentions and the structure of the organisation. How they think about themselves, how they train their recruits and how they structure their operations.

The first hint would be the agency stating that they have a policy of always disclosing. You would of course not believe that because you are a spy with trust issues. But then you would check and hear from all kinds of projects and companies that they are receiving a steady stream of vulnerability reports from the agency. You could detect this by compromising the communications or individuals in the projects receiving the reports, or through simple industrial rumours. That would be the second hint.

Then you would compromise people in the agency for further verification. (Because you are a spy agency. It is your job to have plants everywhere.) You would ask these people “so what do you do when you find a vulnerability?” And if the answer is “oh, we write a report to command and we sometimes never hear about it again” then you know that the stated policy is a lie. If they tell you “we are expected to email the vulnerable vendor as soon as possible, and then work with them to help them fix it, and we are often asked to verify that the fix is good” then you will start to think that the policy is actually genuine.


> How would you ever know that someone always discloses

Same way you know if they don't have nukes. Based on what they say and your best guess.


It is not that turning point. These are VEP vulnerabilities. Like every major government, the US will continue to do online SIGINT.


I'd be surprised if the policy continues.

Or if the people who worked at the agency are still there.


They are not: https://techcrunch.com/2025/01/22/trump-administration-fires...

Also, not a joke, this program contains the word "equity" ("the Director of National Intelligence is required to annually report data related to the Vulnerabilities Equities Process") so it will probably be frozen or cancelled.


[dead]


We've banned this account for using HN primarily (exclusively?) for political/ideological/national battle. That's not allowed here, regardless of what you're battling for or against.

https://news.ycombinator.com/newsguidelines.html


I doubt it. Historically, most government agencies around the world have had appalling security and each iteration is just as bad as the previous with a few half-assed patches on top to cover the known holes.


I might be a contrarian, but I think it makes sense for the NSA to hoard 0-days. They should disclose only after they burn them.


You're only a contrarian on message boards. The economics of CNE SIGINT are so clear --- you'd be paying integer multiples just in health benefits for the extra staff you'd need if you replaced it --- that vulnerabilities could get 10x, maybe 100x more expensive and the only thing that would change would be how lucrative it was to be a competitive vuln developer.

A lot of things can be understood better through the lens of "what reduces truck rolls".


Any 0-day found by an NSA employee can and will be found by someone else, and then sold or used.


In theory, I agree. In practice, how do you explain how NSO Group kept the 0-day WhatsApp remote-execution exploit for its Pegasus product secret? Or do I misunderstand Pegasus? Maybe it isn't a one trick pony, but a platform to continuously deliver 0-day exploits.


The VEP is literally based on that premise.


Right, I think we agree?


Well, I'm just saying: the preceding comment believed themselves to be a contrarian for thinking it was OK for NSA to have these vulns, and it looked like you were rebutting them. But your rebuttal, if that's what it is, restates a premise NSA shares.


The meaning I took from that comment is that the NSA should always keep any 0-day it finds indefinitely until it uses them. That's what I think "hoard" and "disclose only after they burn" means. It's a mindset of keeping everything you find secret because you might want to use it someday.

My understanding of VEP is that the default is supposed to be to disclose immediately unless an agency has a really good reason not to, presumably because they want to use it soon. Don't hoard, only keep things that you're actually going to use.


For clarity: the word "hoard" to me signals the idea that NSA keeps dozens of duplicative exploit chains around. The public policy rationale for them doing that is to me clear, but my understanding is that the practical incentives for them to do that aren't clear at all.

When I say "burn", I mean that something has happened to increase the likelihood that an exploit chain is detectable. That could be them being done with it, it could be independent discovery, it could be changes in runtime protections. It's not "we use it a couple times and then deliberately burn it", though.

We should stop talking about "NSA", because this is basically a universal LE/IC practice (throughout Europe as well).


The NSA's charter should be to secure this country and not attack others.


That organization exists, and it is called the FBI.


I mean, NSA has a directorate for it too, but it's not the agency's chartered purpose.


That is literally the opposite of why NSA exists.


I think "literally the opposite" is incorrect.

Their charter was to take over COMINT from the military into a new joint administrative agency. Their work includes securing communications between allies as much as it does breaking communications from our belligerents. This work necessarily provides them with the types of knowledge and experience that would be highly valuable in securing infrastructure against similar attacks.

Likewise they've involved themselves in both improving and harming civilian cryptography systems and research for years. They're a complex agency and the value of COMINT can be realized under many different root paradigms. It's absurd to think otherwise.


NSA was chartered to break the codes of foreign powers.


That's one aspect of COMINT. I use this term because NSA's charter, such that it exists, uses this term.


Their primary mission is intrinsically offensive and always has been.


You can't actually hoard them, though. They aren't objects, they are knowledge.

A 0-day is present in every instance of the software it can exploit.


This is a meaningless distinction imo

You hoard knowledge by writing it down somewhere and then hoarding the places it's written down. Whether that's books, microfilm, hard drives, what have you


You can't stop someone else from writing it down.

When you hoard something, your possession of that thing effectively takes access to that thing away from everyone else.

You can't keep access to a vulnerability away from anyone!



That's definitely the downside in the trade-off, yeah. If you're going to hoard you better also protect or you just get the worst of all worlds. Still, I am generally hopeful about our intelligence agencies' ability to prevent leaks, even if fuckups have occurred.


That's not the real downside. If it were, you'd be seeing mini-"shadow brokers" leaks every month, because the practice we're talking about here is extremely widespread: the economics of being a zero-day broker hinge on being able to sell the same exploit chain many, many times to the same country, and to get recurring revenue from each such sale.

The real downsides here are probably economic and have to do with how this shifts incentives for everybody in the industry. But, at the same time, every big tech company with a desktop/mobile footprint has invested mightily on staff to counter LE/IC/foreign CNE, which is something that might not have happened otherwise, so it's all complicated.

People write as if the disclosure of a bunch of IC zero days is like some kind of movie-plot "Broken Arrow" situation, but it's really mostly news for message boards. Organizations that need to be resilient against CNE are already in a state of hypervigilance about zero-days; adversaries absolutely have them, no matter what "NSA" does.


Yeah, because intelligence is famously a discipline where nothing ever goes wrong.


OK, hoarding discovered zero-days might not be the best strategy, BUT if we actually create a backdoor and don't tell anyone about it, then this should be safer right? right? /s

https://www.wired.com/2015/12/researchers-solve-the-juniper-...

https://en.wikipedia.org/wiki/Dual_EC_DRBG

https://en.wikipedia.org/wiki/Juniper_Networks#ScreenOS_Back...


The lesson is not in vulnerability management.

The lesson is that our desktop software is garbage and the vendors are not properly held to account.


Probably not. In Trump's first term he was all for allowing ransomware, and the only reason we started seeing a strategy for mitigating it was Biden. Since Trump is all in on crypto, and Russia is the main beneficiary of ransomware, I highly expect cybercrime to ramp up as the current admin is positioned to benefit directly.


Aint no way.

All major governments hoard 0-days or buy them to use for espionage. I don't see this being some kind of "turning point"; it's more of a feel-good, easy PR win for the US gov, but really they are still using many 0-days to spy.


Yeah, this is more like "these vulnerabilities are no longer useful to us" or "adversaries have discovered these and begun using them, so here you go."


Burning 0-days makes your enemies spend more time finding new ones - costs rise, so they will go bankrupt. Cold War 2.0. It's not enough to just run grep / a memcpy finder on software like 15-20 years ago.
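
For concreteness, the "grep / memcpy finder" of 15-20 years ago was roughly this kind of naive pattern scan; a minimal sketch (the flagged names are just the classic unsafe-API suspects, and it only surfaces call sites for manual review):

    # Minimal sketch of the old "grep for dangerous calls" style of auditing.
    import re, sys

    SUSPECTS = re.compile(r"\b(memcpy|strcpy|strcat|sprintf|gets)\s*\(")

    for path in sys.argv[1:]:
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if SUSPECTS.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")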


There is no such thing as a "Nobody But Us" vulnerability. Leaving holes in systems and praying enemies won't discover them, with the hope of attacking them ourselves is extremely foolish.


CNE "zero-day" isn't "NOBUS", so you're arguing with a straw man.


I mean there are certainly vulnerabilities that are substantially asymmetric.


I've seen the invite-only marketplaces where these exploits are sold. You can buy an exploit to compromise any piece of software or hardware that you can imagine. Many of them go for millions of dollars.

There are known exploits to get root access to every phone or laptop in the world. But researchers won't disclose these to the manufacturers when they can make millions of dollars selling them to governments. Governments won't disclose them because they want to use them to spy on their citizens and foreign adversaries.

The manufacturers prefer to fix these bugs, but aren't usually willing to pay as much as the nation states that are bidding. All they do is drive up the price. Worse, intelligence agencies like the NSA often pressure or incentivize major tech companies to keep zero-days unpatched for exploitation.

It's a really hard problem. There are a bunch of perverse incentives that are putting us all at risk.


> It's a really hard problem

Hard problems are usually collective-action problems. This isn't one. It's a tragedy of the commons [1], the commons being our digital security.

The simplest solution is a public body that buys and releases exploits. For a variety of reasons, this is a bad idea.

The less-simple but, in my opinion, better model is an insurance model. Think: FDIC. Large device and software makers have to buy a policy, whose rate is based on number of devices or users in America multiplied by a fixed risk premium. The body is tasked with (a) paying out damages to cybersecurity victims, up to a cap and (b) buying exploits in a cost-sharing model, where the company for whom the exploit is being bought pays a flat co-pay and the fund pays the rest. Importantly, the companies don't decide which exploits get bought--the fund does.

Throw in a border-adjustment tax for foreign devices and software and call it a tariff for MAGA points.

[1] https://en.wikipedia.org/wiki/Tragedy_of_the_commons
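
A back-of-the-envelope sketch of how the fund's cash flows might look (every figure, including the flat co-pay, is invented purely for illustration, not part of the proposal):

    # Toy numbers for the proposed FDIC-style fund; everything here is invented.
    DEVICES_IN_US = 150_000_000   # devices/users the vendor has in America
    RISK_PREMIUM  = 0.25          # fixed per-device premium, in dollars
    EXPLOIT_PRICE = 2_000_000     # grey-market price of one exploit chain
    VENDOR_COPAY  = 250_000       # flat co-pay owed by the affected vendor

    annual_premium = DEVICES_IN_US * RISK_PREMIUM
    fund_share     = EXPLOIT_PRICE - VENDOR_COPAY

    print(f"vendor pays the fund {annual_premium:,.0f}/yr in premiums")
    print(f"on a buy, vendor co-pays {VENDOR_COPAY:,}, fund covers {fund_share:,}")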


I think what is actually the problem is the software and hardware manufacturers.

Secure use of any device requires a correct specification. These should be available to device buyers and there should be legal requirements for them to be correct and complete.

Furthermore, such specifications should be required also for software-- precisely what it does and legal guarantees that it's correct.

This has never been more feasible. Also, considering that we Europeans are basically at war with the Russians, it seems reasonable to secure our devices.


We already have that: ISO 15408, Common Criteria [1]. Certification is already required and done for various classes of products before they can be purchased by the US government.

However, large commercial IT vendors such as Microsoft and Cisco were unable to achieve the minimum security requirements demanded for high criticality deployments, so the US government had to lower the minimum requirements so their bids could be accepted.

At this point, all vendors just specify and certify that their systems have absolutely no security properties and that is deemed adequate for purchase and deployment.

The problem is not lack of specification, it is that people accept and purchase products that certify and specify they have absolutely zero security.

[1] https://en.m.wikipedia.org/wiki/Common_Criteria


Yes, but consumers buy, for example, graphics cards with binary blobs and are certainly not sent a specification of the software in them, or of the interfaces, etc. and that is what I believe is the absolute minimum foundation.

So I mean an internal specification of all hardware interfaces and a complete description of software-- no source code, but a complete flow diagram or multi-process equivalent.


> These should be available to device buyers and there should be legal requirements for them to be correct and complete

You're still left with a massive enforcement problem nobody wants to own. Like, "feds sued your kid's favourite toy maker because they didn't file Form 27B/6 correctly" is catnip for a primary challenger.


That's an incredibly tough sell, particularly for software. Who is it that should "require" these specifications, and in what context? Can I still put my scrappy code on Github for anyone to look at? Am I breaking the law by unwittingly leaving in a bug?


Yes, but you wouldn't be able to sell it to a consumer.

The way I imagine it: no sales of this kind of thing to ordinary people, only to sophisticated entities who can be expected to deal with the incompletely specified source code. So if a software firm wants to buy it, that's fine, but you can't shrink-wrap it and sell it to an ordinary person.


Modern software is layers upon layers of open-source packages and libraries written by tens of thousands of unrelated engineers. How do you write a spec for that?


A tragedy of the commons occurs when multiple independent agents exploit a freely available but finite resource until it's completely depleted. Security isn't a resource that's consumed when a given action is performed, and you can never run out of security.


> Security isn't a resource that's consumed when a given action is performed, and you can never run out of security

Security is in general non-excludable (vendors typically patch for everyone, not just the discoverer) and non-rival (me using a patch doesn't prevent you from using the patch): that makes it a public good [1]. Whether it can be depleted is irrelevant. (One can "run out" of security inasmuch as a stack becomes practically useless.)

[1] http://www.econport.org/content/handbook/commonpool/cprtable...


>Security is [...] a public good

Yeah, sure. But that doesn't make it a resource. It's an abstract idea that we can have more or less of, not a raw physical quantity that we can utilize directly, like space or fuel. And yes, it is relevant that it can't be depleted, because that's what the term "tragedy of the commons" refers to.


> it is relevant that it can't be depleted, because that's what the term "tragedy of the commons" refers to

I think you're using an overly-narrow definition of "tragedy of the commons" here. Often there are gray areas that don't qualify as fully depleting a resource but rather incrementally degrading its quality, and we still treat these as tragedy of the commons problems.

For example, we regulate dumping certain pollutants into our water supply; water pollution is a classic "tragedy of the commons" problem, and in theory you could frame it as a black-and-white problem of "eventually we'll run out of drinkable water", but in practice there's a spectrum of contamination levels and some decision to be made about how much contamination we're willing to put up with.

It seems to me that framing "polluting the security environment" as a similar tragedy of the commons problem holds here, in the sense that any individual actor may stand to gain a lot from e.g. creating and/or hoarding exploits, but in doing so they incrementally degrade the quality of the over-all security ecosystem (in a way that, in isolation, is a net benefit to them), but everyone acting this way pushes the entire ecosystem toward some threshold at which that degradation becomes intolerable to all involved.


> It's an abstract idea that we can have more or less of, not a raw physical quantity that can utilize directly, like space or fuel

Uh, intellectual property. Also land ownership is an abstract idea. (Ownership per se is an abstract idea.)


Land is obviously a finite resource. I don't know what point you're trying to make with regards to intellectual property.


> don't know what point you're trying to make with regards to intellectual property

Stocks. Bonds. Money, for that matter. These are all "abstract idea[s] that we can have more or less of, not a raw physical quantity." We can still characterise them as rival and/or excludable.


Security may be considered a "commons", but those accountable are individual manufacturers. If my car is malfunctioning I'm punished by law enforcement. There are inspections and quality standards. Private entities may provide certifications.


Please no more mandated insurance programs.


insurers can be quite good at enforcing quality standards


The markets here are complicated and the terms on "million dollar" vulnerabilities are complicated and a lot of intuitive things, like the incentives for actors to "hoard" vulnerabilities, are complicated.

We got Mark Dowd to record an episode with us to talk through a lot of this stuff (he had given a talk whose slides you can find floating around, long before) and I'd recommend it for people who are interested in how grey-market exploit chain acquisition actually works.

https://securitycryptographywhatever.com/2024/06/24/mdowd/


Makes me wonder if there are engineers on the inside of some of these manufacturers intentionally hiding 0-days so that they can then go and sell them (or engineers placed there by companies who design 0-days).


People have been worrying about this for 15 years now, but there's not much evidence of it actually happening.

One possible reason: knowing about a vulnerability is a relatively small amount of the work in providing customers with a working exploit chain, and an even smaller amount of the economically valuable labor. When you read about the prices "vulnerabilities" get on the grey market, you're really seeing an all-in price that includes value generated over time. Being an insider with source code access might get you a (diminishing, in 2025) edge on initial vulnerability discovery, but it's not helping you that much on actually building a reliable exploit, and it doesn't help you at all in maintaining that exploit.


A good vulnerability / backdoor should be indistinguishable from a programming mistake. An indirect call. A missing check on some bytes of encrypted material. Add some validation and you will have a good item to sell that no one else can find.
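
A contrived sketch of that kind of "mistake", in Python for brevity (the verify_tag function and all values here are hypothetical): checking only a prefix of the authentication tag looks like an innocent slip, but it quietly makes forgery practical.

    import hmac, hashlib

    def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, message, hashlib.sha256).digest()
        # Looks like a harmless shortcut; actually only 2 of the 32 tag bytes
        # are checked, so a forger needs at most 65,536 guesses.
        return hmac.compare_digest(expected[:2], tag[:2])

    key, msg = b"secret", b"transfer $100"
    good = hmac.new(key, msg, hashlib.sha256).digest()
    print(verify_tag(key, msg, good))                      # True, so tests pass
    print(verify_tag(key, msg, good[:2] + b"\x00" * 30))   # True: forged tag accepted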


See: second paragraph above.


Are we just straight up ignoring the Jia Tan xz exploit that happened 10 months ago, which would've granted ssh access to the majority of servers running OpenSSH? Or does that not count for the purposes of this question, because that was an open source library rather than a hardware manufacturer?


Is there any evidence the author of this backdoor was able to sell it to anyone, for any kind of money?


> It's a really hard problem.

Classify them as weapons of mass destruction. That's what they are. That's how they should be managed in a legal framework and how you completely remove any incentives around their sale and use.


How about some penalties for their creation? If NSA is discovering or buying, someone else is creating them (even if unintentionally).

Otherwise corporations will be incentivized (even more than they are now) to pay minimal lip service to security - why bother investing beyond a token amount, enough to make PR claims when security inevitably fails - if there is effectively no penalty and secure programming eats into profits? Just shove all risk onto the legal system and government for investigation and clean up.


> weapons of mass destruction. That's what they are

Seriously HN? Your Netflix password being compromised is equivalent to thermonuclear war?


Think more along the lines of exploits that allow turning off a power grid, spinning a centrifuge too fast, or releasing a dam.


> exploits that allow turning off a power grid, spinning a centrifuge too fast, or releasing a dam

By this definition trucks are WMDs because they, too, can blow up a dam.

Hyperbolic comparisons undermine the speaker’s authority. Zero Days aren’t WMDs.


That is never, ever going to happen, and they are nothing at all like NBC weapons.


Yes. Except our government is the largest buyer.


The USA has 5,044 nuclear warheads, so that shouldn't be a problem.


Suddenly I felt like re-reading Ken Thompson’s essay Reflections on Trusting Trust.

We’ve created such a house of cards. I hope when it all comes crashing down that the species survives.


Instead of hoping, you can do a lot just by ditching your cell phone and using Debian stable.


Ah yes, switching from an iPhone to Debian is sure to… checks notes save the species from extinction.

Apologies for the dismissive snark; perhaps you could provide me some examples of how this would help?


reminds me of the anthropic claude jailbreak challenge which only pays around $10,000. if you drive the price up, i'm pretty sure you'll get some takers. incentives are not aligned.


These are just the disclosed ones. The weaponized ones (as mentioned), found or bought and kept secret by the NSA etc., such as from Zerodium (ex-VUPEN) and similar, obviously aren't counted. ;)


It's a "tell" in these discussions when they center exclusively on NSA, since there are dozens of agencies in the USG that traffick in exploit chains.


So there were 39 vulnerabilities that affected government systems. The rest didn't, so they had no need to disclose.


Similar, but my thought is that they found out some other gov(s) know about it as well. And that it hurts others more than it hurts the US gov.


I'd say that's a very cynical take. There's a verification process and they disclosed 90% of them - a pretty generous gift to the world if you ask me. They do not have a moral mandate to use their resources to benefit all.


"What the government didn't reveal is how many zero days it discovered in 2023 that it kept to exploit rather than disclose. Whatever that number, it likely will increase under the Trump administration, which has vowed to ramp up government hacking operations."

This is a bit of a prisoner's dilemma. The world would be better off if everyone disclosed every such exploit for obvious reasons. But if government A discloses everything and government B reserves them to exploit later, then government B has a strong advantage over government A.

The only responses then are war, diplomacy, or we do it too and create yet another mutually assured destruction scenario.

War is not going to happen because the cure would be worse than the disease. The major players are all nuclear powers. Diplomacy would be ideal if there were sufficient trust and buy-in, but it seems unlikely the U.S. and Russia could get there. And with nuclear treaties there's an easy verification method since nuclear weapons are big and hard to do on the sly. It'd be hard to come up with a sufficient verification regime here.

So we're left with mutually assured cyber destruction. I'd prefer we weren't, but I don't see the alternative.


If Government A and Government B are not equally "good" for the world, then the world is _not_ better off if everyone disclosed, since the main users of CNE are LE/IC.


I'm not sure what some of these initialisms are but the whole idea behind disclosing is to take tools away from the bad guys (whoever you think they are) because presumably they'll have found some of them too.


CNE: the modern term of art for deploying exploits to accomplish real-world objectives ("computer network exploitation").

LE: law enforcement, a major buyer of CNE tooling.

IC: the intelligence community, the buyer of CNE tooling everyone thinks about first.


Disclosing zero-days so the vendor can patch them and declare "mission accomplished" is such a waste.

"Penetrate and Patch" is about as effective for software security as it is for bulletproof vests. If you randomly select 10 bulletproof vests for testing, shoot each 10 times and get 10 holes each, you do not patch those holes and call it good. What you learned from your verification process is that the process that lead to that bulletproof vest is incapable of consistently delivering products that meet the requirements. Only development process changes that result in passing new verification tests give any confidence of adequacy.

Absent actively, or likely actively, exploited vulnerabilities, the government should organize vulnerabilities by "difficulty" and announce the presence of, but not disclose the precise nature of, vulnerabilities, and demand process improvement until vulnerabilities of that "difficulty" are no longer present, as indicated by fixing all "known, but undisclosed" vulnerabilities of that "difficulty". Only that provides initial supporting evidence that the process has improved enough to categorically prevent vulnerabilities of that "difficulty". Anything less is just papering over defective products on the government's dime.


"Penetrate and patch" is a term of art Marcus J. Ranum tried to popularize 15-20 years ago, as part of an effort to vilify independent security research. Ranum was part of an older iteration of software security that was driven by vendor-sponsored cliques. The status quo ante of "penetrate and patch" that he was subtextually supporting is not something that most HN people would be comfortable with.


"Penetrate and Patch" as a failed security process is distinct from vilifying independent security research, and it should be obvious from my post as I point out that penetration testing is a integral part of the testing and verification process. It tells you if your process and design have failed, but it is not a good development process itself.


Again and tediously: the only reason anyone would use that sequence of words would be to invoke Ranum's (in)famous "Six Dumbest Ideas In Computer Security" post, one of which was, in effect, "the entire modern science of software security".

I recommend, in the future, that if you want to pursue a security policy angle in discussions online with people, you avoid using that term.


I am invoking it. “Penetrate and Patch” is illustrative and catchy and the arguments presented, for that one in particular, are largely theoretically, empirically, and even predictively supported.

In fact, all of the points except "Hacking is Cool" are largely well supported. The common thread is that all the other points are about "designing systems secure against common prevailing threat actors" (i.e. "defense") and only that point is about the "development of adversarial capabilities" (i.e. offense), which they wrongfully undervalue and even associate with criminality, misunderstanding the value of verification processes.

And besides, the entire modern science of software security is, objectively, terrible at "designing systems secure against common prevailing threat actors". What it is pretty good at is the "development of adversarial capabilities", which has so far vastly outstripped the former and demonstrates quite clearly that prevailing "defense" is grossly inadequate by multiple orders of magnitude.


If you say so, but I was a practitioner in 1997 and am a practitioner in 2025 and I think you'd be out of your mind to prefer the norms and praxis of circa-1997 software security. You do you.


The proposals were not the norms of the time. It is literally a rant against prevailing “bad” ideas of the time.

“Default Permit”, “Enumerating Badness”, “Penetrate and Patch”, “Educating Users” were the norms of the time and still, largely, are. And that is part of why “defense” is so grossly inadequate against common prevailing threat actors.

I prefer the norms of high security software which include, but do not consist exclusively of, most of the stated ideas.


I'm pretty confident in my assessment of the post you're talking about. Ranum and I go back.


I have no idea what assessment you are even referencing. The only assessments you made as far as I can tell were:

1) The post was made to vilify independent security research.

2) The post argues against "the entire modern science of software security".

3) The norms of 1997 are worse than the norms of 2025 and that my support for the argument that "Penetrate and Patch" is a poor security policy is somehow indicative that I prefer the norms of 1997 even though "Penetrate and Patch" was and continues to be the prevailing norm.

To which I argue:

1) Maybe so. However, the arguments against most of the stated practices, in particular everything except idea #4, stand on their own and have stood the test of time.

2) Given the contents of the rest of that post, this is almost surely a statement that idea #4 was wrong, which I have agreed was incorrect. I also, separately, argue that "the entire modern science of software security" is basically useless at producing systems secure against common prevailing threat actors, as is demonstrated daily. This is distinct from the high level of capability in producing systems and processes that can identify and exploit vulnerabilities, which is a clear win for "the entire modern science of software security". However, such capability is not very helpful in achieving the former, which is the thing usually desired from "security".

3) I am just baffled. I have to assume that you just misread my position. Otherwise you are arguing that the systems of 1997 were default deny, explicit whitelists, engineered for security from the start, and operate in a secure configuration by default. Or that the security processes of 2025 are that way. Both of which are laughable. I guess you could also argue that those principles are bad, but I am pretty sure even the sorry state of affairs that passes for "software security" these days recognizes those are correct principles even if they utterly fail to even attempt them.


> the government should organize vulnerabilities by "difficulty" and announce the presence of, but not disclose the precise nature of, vulnerabilities, and demand process improvement until vulnerabilities of that "difficulty" are no longer present, as indicated by fixing all "known, but undisclosed" vulnerabilities of that "difficulty"

For this amount of bureaucracy, the government should just hire all coders and write all software.


You appear to misunderstand what I am saying:

1) Government already has vulnerabilities.

2) Government identifies vulnerabilities they already own by "difficulty to discover".

3) Government selects the lowest "difficulty to discover" vulnerabilities they already own.

4) Government announces that products with known "lowest difficulty to discover" vulnerabilities are vulnerable, but does not disclose them.

5) Government keeps announcing that those products continue to be the most "insecure" until all vulnerabilities they already own at that level are fixed.

6) Repeat.


What you're suggesting requires creating a massive federal bureaucracy to continuously survey the product landscape. It then requires the private sector to duplicate that work. This is stupid.


I have no idea how you came to that conclusion.

The government has vulnerabilities in stock that they already verify function as intended. They already regularly verify these vulnerabilities continue to function with each update cycle. They already quantify these vulnerabilities by ease-of-discovery, impact, etc. so they can prioritize utilization. They already determine vulnerabilities to disclose.

Assuming that the vulnerabilities they disclose are not entirely "actively under exploit", the only difference in my proposed policy is that they do not disclose the details to the vendors so they can paper over them. Instead, they publicly announce the presence of vulnerabilities and then keep verifying as they already do until the vulnerabilities no longer function.

You seem to think I am arguing that the government should create a new organization to look for vulnerabilities in all software everywhere and then act as I stated.


> they publicly announce the presence of vulnerabilities and then keep verifying as they already do until the vulnerabilities no longer function

Yes. This is difficult. Particularly given you need to do it fairly and thus comprehensively.


Then you are arguing with a strawman. I never proposed they do it across all products, only the products that they already have vulnerabilities in and that they already seek to disclose.

You cannot change the parameters of my proposal to explicitly require a gigantic bureaucracy and then argue that it is a poor idea because the gigantic bureaucracy you added is a problem. You could have argued that my proposal is untenable because it would be "unfair" and the only way to make it fair would be to do it comprehensively, which would be too hard.

To which I would state:

1) That means we as a society care more about being "fair" to companies with inadequate software security than demanding adequate software security. Could be the case, but then everything other than roll over and accept it is off-the-table.

2) The government already requires certification against the Common Criteria for many software products in use by the government. You could restrict this policy to just systems that are used and require certification before use. Thus being applied "fairly" to government procurement and incentivizing improvements in security for procured systems.

3) This should actually just be general policy for everybody, but only the government, currently, has enough leverage to really pull off publicly announcing a problem and the vendor not being able to just shove it under the rug.

And, even if you disagree with those points, I am also making the point that the current policy of disclosing the vulnerability so the vendor can make a point-patch to resolve it instead of fixing their overall security process is a failed security policy at the societal level. We need mechanisms to encourage overall security process improvement. Currently, the only thing that does that is the exponentially increasing amount and severity of hacks, and it would be nice to get ahead of it instead of being purely reactionary.


I think people give the US a lot of unnecessary shit. I don't think my government releases any zero days but I am sure they must have found some. Every government today probably uses zero days but it seems very few release information about them?


It's not about being held to a lower standard, it's about being held to a higher standard.


Simply because not enough anti-malware vendors are willing to let the US government know that one of their favorite hoards of malware has lost "its edge".

So, either they form a department of viability or they lose it all.


While I don’t think we should be hoarding vulns, the idea of the government having huge budgets to find and disclose software defects is a bit strange to me. Seems like another instance of socializing bad externalities.


These are wins because if they're actually patched it takes offensive tools away from our adversaries.


The US often gets negative takes for doing what many other nations are also doing.

For example, in 2018 Tencent (basically, China) withdrew from hacking competitions like Pwn2Own, taking with them the disclosures that came with participation.


Essentially every other industrialized nation.


Whataboutism. What China does doesn't invalidate any criticism the US gets. Or are you saying that actually it's perfectly fine to do this?


China hiding exploits it finds has a large impact on US policy. Should the US reveal every zero day it knows, then in a theoretical conflict with China, China will have zero days the US didn't know about, but the US will have none.


Then the counterargument to "the US shouldn't be hiding exploits it knows" isn't "but China does it too", it's "actually the US should be doing exactly that because it's in its best interest".


Or there's some game theory and an empirically tested amount of potential for violence (even the cyber kind) that one should reserve because of humans' tendency to be shitty when they can.


That would be true if it was just China, but when it's so many countries that it's essentially an international norm, the "whataboutism" charge loses some of its sting.


"But other countries do it too" is whataboutism, no matter how many other countries that is; it doesn't invalidate the criticism no matter the number. So I ask again: is the real counterargument that actually this is perfectly fine to do?


Whether something is acceptable depends very heavily on cultural norms. If lots of people do it without condemnation, that is compelling evidence there is a cultural norm which accepts it. That's a cromulent argument.


"Whataboutism" is pointing out other bad things to excuse or distract from the bad thing I'm doing.

Other countries doing the same thing is an argument about whether the thing is actually bad, and could be valid.


https://en.wikipedia.org/wiki/Whataboutism

For example, when someone accuses you of taking money out of the register, it would be an example of whataboutism to point that the person accusing you does it too. The X in "what about X?" doesn't need to be something fundamentally different, it just needs to be something that distracts from the original criticism.

If the real argument is that not disclosing vulnerabilities is not actually a bad thing, then the fact that China also doesn't do it is completely irrelevant.


> For example, when someone accuses you of taking money out of the register, it would be an example of whataboutism to point that the person accusing you does it too.

I don't think it would. Or at least, I don't think it necessarily would. "You do it, therefore it's not bad, therefore I'm not wrong" is a reasonable argument that is not distraction. "All these other people also do it and it is treated as de minimis at worst, and yet you're singling me out for inappropriate reasons" may be reasonable depending on context or may be somewhat abusive... but it is a different argument than "Whataboutism". Bringing up that you do it, too, purely (or mainly?) as a distraction can probably reasonably be accounted Whataboutism, but it's a distractingly noncentral example because of its proximity to those other arguments.

> If the real argument is that not disclosing vulnerabilities is not actually a bad thing, then the fact that China also doesn't do it is completely irrelevant.

Other agents having made the same decision is perhaps some evidence that it was a reasonable decision, although it very much depends on context.


Yes.


I guess there won't be one for 2024.


NOBUS is a disaster. Knowingly leaving citizens unprotected is an absolute failure of government. Having a robust policy of identifying and resolving cybersecurity faults, and holding organizations accountable for patching and remediation, is necessary if we are going to survive a real cyber "war". We are absolutely unprepared.


This is a classic security dilemma that is not easily resolvable. Suppose we just look at the US and China. Each side will discover some number of vulnerabilities. Some of those vulnerabilities will be discovered by both countries, some just by one party. If the US discloses every vulnerability, we’re left with no offensive capability and our adversary will have all of the vulnerabilities not mutually discovered. Everyone disclosing and patching vulnerabilities sounds nice, but is an unrealistic scenario in a world with states that have competing strategic interests.


The US VEP, which is like 10 years old now, is explicit about the fact that zero-day exploits are not fundamentally "NOBUS". That term describes things like Dual EC, or hardware embedded vulnerabilities; things that are actually Hard for an adversary to take advantage of. Chrome chains don't count.


This presupposes that the purpose of government is to protect citizens.

The purpose of government is to take and maintain power and prevent any other organization from displacing them. It involves citizens only as a means to an end.

It would be a failure of government to place citizen safety over continuity of government.


Your first sentence seems like a distraction because the same criticism of NOBUS holds either way. Even if the leaders do not care about the citizens except as a means to an end, an oppressive government especially needs to maintain security because it needs to project force to deter threats to its power. If they have a known vulnerability, they should be even more worried that internal dissidents or an external foe will find it because they are even more dependent on power in the absence of democratic legitimacy.


It can be both at the same time. A government won't be great at protecting its citizens if it has no power over bad actors, both inside and outside.


> What changed the calculus in 2023 isn’t clear.

Well, the calculus didn't change in 2023 if the report was only released a month or so ago. And in fact, in May 2024:

DHS, CISA Announce Membership Changes to the Cyber Safety Review Board https://www.dhs.gov/archive/news/2024/05/06/dhs-cisa-announc...

So some new people came in and decided that more public information was better.

> On January 21, 2025, it was reported that the Trump administration fired all members of the CSRB.

Ah, well, never mind then


Yep. From the article:

> This lack of transparency could become a greater issue under the Trump administration, which has vowed to ramp up the government's cyber offensive operations, suggesting that the government demand for zero-day vulnerabilities may increase over the next four years. If this occurs, the government’s previous statements that the VEP favors disclosure and defense over withholding and offense may no longer be true. ...

> “The VEP and that number of 90 percent was one of the few places where the president and the White House could set the dial on how much they liked defense vs offense,” says Jason Healey, senior research scholar at Columbia University’s School of International and Public Affairs and former senior cybersecurity strategist for CISA. “[The Trump administration] could say we’re disclosing too [many vulnerabilities]. If the default [in the past] was to disclose unless there is a reason to keep, I could easily imagine the default is going to be to keep unless there is a reason to disclose.”


Were these the same people that declared the 2020 election as being the safest ever? I can't imagine why that would grate Trump


[flagged]


Easy, just hire people who constantly say “thank you for your leadership sir.”

I often talk about encouraging a truthseeking culture at work, because people naturally tend to gravitate toward easy or popular ideas.

But this is more like a lie-seeking culture.



