CISA boss: Makers of insecure software are the real cyber villains (theregister.com)
137 points by tsujamin 21 days ago | 151 comments



At this point, I have to wonder what is even the point of missives like this. There are only two things that will solve the software quality problem:

1. Economic incentives. It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products. If you're not talking about that, what you're saying is basically a useless "ok, pretty please."

2. Reducing the complexity of making products secure in the first place. Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary. Professional structural engineers, for example, are used to taking liability for their designs and buildings. But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.

The other thing that people often ignore, or at least don't want to admit, is that the "move fast and break things" ethos has been phenomenally successful from a business perspective. The US software industry grew exponentially faster than anyplace else in the world, even places like India that doubled down on things like the "Software Capability Maturity Model" in the early 00s, and honestly have little to show for it.


> At this point, I have to wonder what is even the point of missives like this. ...It's all just mindless blather unless you're actually talking about ways that software vendors will be held liable for bugs in their products.

I think that liability for bugs is exactly where she's going with this. I'm not an expert, but it sounds from a few things I've heard on some Lawfare podcasts (e.g., [1][2]) like the idea of software liability has been discussed for quite a while now in government policy circles. This sort of public statement may be laying the groundwork for building the political will to make it happen.

[1] https://www.youtube.com/watch?v=9UneL5-Q98E&pp=ygUQbGF3ZmFyZ...

[2] https://www.youtube.com/watch?v=zyNft-IZm_A&pp=ygUQbGF3ZmFyZ...

EDIT:

> Making truly secure software products is incredibly hard in this day and age, which is one reason why demanding software product liability is so scary.

Loads of companies already are liable for bugs in software that runs on their products: this includes cars, airplanes, and I would presume medical devices, and so on. The response has been what's called "safety certification": as an industry, you define a process which, if followed, you can in court say "we were reasonably careful", and then you hire an evaluator to confirm that you have followed that process.

These processes don't prevent all bugs, naturally, but they certainly go a long way towards reducing them. Liability for companies who don't follow appropriate standard processes would essentially prevent cloud companies cutting security to get an edge in time-to-market or cost.


> The response has been what's called "safety certification":

This is the most scary part for me. Certifications are mostly bureaucratic sugar, and at the same time very expensive. This seems like a sure way to strangle your startup culture.

If customers require certifications worth millions, nobody can bootstrap a small business without outside capital.


Assuming the level of certification will be proportionate to the potential risk/harm, then this is actually totally ok. Like, would you want to fly in a plane built by a bootstrapped startup that had no certifications? Or go in a submarine on extremely deep ocean tours of the Titanic? Or have a heart device put in? Or transfer all of your savings to a startup's financial software that had no proof of being resilient to even the most basic of attacks?

For me, it's a hard no. The idea of risk/harm based certification and liability is overdue.


Problem is that it's rarely proportional.

There's a different thread on HN about the UK Foundations essay. It gives the example of the builders of a nuclear reactor being required to install hundreds of underwater megaphones to scare away fish that might otherwise be sucked into the reactor and, um, cooked. Yet cooking fish is clearly normal behavior that the government doesn't try to restrict otherwise.

This type of thing crops up all over the place where government certification gets involved. Not at first, but the ratchet only moves in one direction. After enough decades have passed you end up with silliness like that, nobody empowered to stop it and a stagnating or sliding economy.

> Like, would you want to fly in a plane built by a bootstrapped startup that had no certifications?

If plenty of other people had flown in it without problems, sure? How do you think commercial aviation got started? Plane makers were startups once. But comparing software and planes (or buildings) is a false equivalence. The vast majority of all software doesn't hurt anyone if it has bugs or even if it gets hacked. It's annoying, and potentially you lose money, but nobody dies.


Commercial aviation was regulated because planes were killing people, and when it came in, air travel became the safest form of transportation. That isn't a coincidence. If the vast majority of software doesn't hurt anyone if it has bugs, then it won't require any certifications. If you heard me arguing for that, then you heard wrong. I am advocating for risk/harm based certification/liability.


Aren't you arguing for the status quo then? Only a small amount of software can cause physical harm, and that software is already regulated (medical devices etc).


Financial harm and harm via personal records being hacked should also be included. The Equifax leak for example should have resulted in much worse consequences for the executives and also new software compliance regulations to better safeguard that sort of record-keeping.


Why aren't they installing grates on the intakes?


There will be grates but fish are small and obviously grates must have holes in them.


> The response has been what's called "safety certification": as an industry, you define a process which, if followed, you can in court say "we were reasonably careful", and then you hire an evaluator to confirm that you have followed that process.

Can you still call it "liability" when all you have to do is perform some kind of compliance theater to get rid of it?


Technically yes, since it depends on the kind of liability.

In particular, liability for negligence depends on somebody doing stupid things, or stupidly not doing the right things, and there's an ongoing struggle to define where that envelope lies.


The ‘compliance theater’ is often filled with things which seem inane for your product, but the requirements themselves are often broad so that they cover many different types of products. Sure, we could add further granularity, but there is usually a ton of pushback and many people need to come to some compromise on what is relevant.


Ask Boeing.


A third option is to empower security researchers and hope the good guys find the security holes before the bad guys.

Currently, we threaten the good guys, security researchers, with jail way too quickly. If someone presses F12 and finds a bunch of SSNs in the raw HTML of the State's web page, the Governor personally threatens to send them to jail[0]. The good security researchers tiptoe around, timidly asking permission to run pentests, while the bad guys do whatever they want.

Protect security researchers, change the laws to empower them and give them the benefit of the doubt.

I think a big reason we don't do this is it would be a burden and an embarrassment to wealthy companies. It's literally a matter of national security, and we currently sacrifice national security for the convenience of companies.

As you say, security is hard, and putting liability for security issues on anyone is probably unreasonable. The problem is companies can have their cake and eat it too. The companies get full control over their software, and they get to pick and choose who tests the security of their systems, while at the same time having no liability. The companies are basically saying "trust me to take care of the security, also, I accept no liability"; that doesn't inspire confidence. If the liability is too much to bear, then the companies should give up control over who can test the security of their systems.

[0]: https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-...


It suggests that insecure software should simply be called a defective product, and that the security audit should be called QA.

A vendor that doesn't spend a lot on QA profits more, unless there is a catastrophic incident.

Also, why haven't those so-called security researchers jointly criticized EDR yet? A third-party closed-source kernel driver which behaves practically the same as malware.


Software has tons of bugs. A fraction of those bugs are security vulnerabilities.

Any type of bug can be considered a defect, and thus can be considered to make the product defective. By using the terminology "defective" instead of "vulnerable" we lose the distinction between security bugs and other bugs. I don't think we want to lose that distinction.


Security-related product defect, or simply security defect.


Emphatic agreement.

Source: Was QA/test manager for a bit. Also, recovering election integrity activist.

TIL: The conversation is just easier when bugs, security holes, fraud, abuse, chicanery, etc are treated as kinds of defects.

Phrases like "fraud" and "exploit" are insta-stop conversation killers. Politicians and managers (director level and above) simply can't hear or see or reason about those kinds of things. (I can't even guess why not. CYA? Somebody else's problem? Hear no evil...?)

QA/Test rarely receives its needed attention and resources. Now less than ever. But advocating for "fixing bugs" isn't a total bust.


> Also, why haven't those so-called security researchers jointly criticized EDR yet? A third-party closed-source kernel driver which behaves practically the same as malware.

They have been, for years. Some very prominent voices in the security community have been criticizing the level of engineering prowess in security tools for ages - Tavis Ormandy ripped into the AV industry for Project Zero over a decade ago and found critical problems in things like FireEye, too. The problem is that without penalties, the companies just keep repeating the cycle of “trust us” without improving their levels of craft or transparency.


In terms of shaming bad products, motivating executives, etc, maybe... However I don't think you can easily combine general QA and security stuff under one roof, because they demand different approaches and knowledge sets.


> But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.

The other thing is that software, especially network-connected software (which these days is most software), has to have a much higher level of security than products in most other industries. When a structural engineer builds a bridge, they don't have to worry about a large number of criminals from all over the world trying to find weaknesses that can be exploited to cause the bridge to collapse.

But software engineers do have to worry about hackers, including state sponsored ones, constantly trying to find and exploit weaknesses in their software.

I think it's absurd to put the blame on software engineers for failing to make flawless software instead of on the bad actors that exploit bugs that would never be noticed during normal operation, or on the law enforcement that is ineffective at stopping such exploitation.

Now, I do think that there should be more liability if you don't take security seriously. But there is a delicate balance there. If a single simple mistake has the potential to cause devastating liability, that will seriously slow down the software industry and substantially increase the cost.


Lock makers aren't liable for making pickable locks. Punish the bad actors and leave the liability to civil suits.


If a single simple mistake has the potential to cause devastating harm, then that is precisely the standard that should be demanded. If you do not want that, then you can work on something where your mistakes do not cause immense harm to others or reduce the scope of the system so such harm is not possible.

Imagine a mechanical engineer making industrial robots complaining about how unfair it is that they are held liable for single simple mistakes like pulping people. What injustice! If you want to make single simple mistakes, you can work on tasks where that is unlikely to cause harm like, I don't know, designing door handles? Nothing wrong with making door handles, the stakes are just different.

"But I want to work on problems that can kill people (or other devastating harm), but not be responsible for killing them (or other devastating harm)" is a utterly insane position that has taken hold in software and has no place in serious, critical systems. If you can not make systems fit for the level of harm they can cause, then you have no place making such systems. That is irrespective of whether anybody in the entire world can do so; systems inadequate to minimize harm to the degree necessary to provide a positive societal cost-benefit over their lifetime are unfit for usage.


>Imagine a mechanical engineer making industrial robots complaining about how unfair it is that they are held liable for single simple mistakes like pulping people. What injustice! If you want to make single simple mistakes, you can work on tasks where that is unlikely to cause harm like, I don't know, designing door handles? Nothing wrong with making door handles, the stakes are just different.

This is a truly absurd comparison. In the first place, yes, it is in fact much easier to make physical products safer, as anyone who's ever seen a warning line or safety guard could tell you. The manufacturers of CNC mills don't accept liability by making it impossible to run a bad batch that could kill someone; they just put the whole machine in a steel box and call it a day. The software consumers want has no equivalent solution. What's worse, in the second place, these engineers aren't actually held responsible for the equivalent of most software breaches already. There is pretty much zero liability for tampering or misuse, hence the instruction booklet that is 70% legal warnings which still comes with everything you buy, even in this age of cutting costs. Arguing software should be held to the same standard as physical products, when software has no equivalent of safety lockouts, is just to argue it should include acceptable-use sections in its terms and conditions, which is no real improvement to people's security in the face of malicious actors.


Do you think that the mechanical engineer should be held liable if, say, a criminal breaks into the factory and sabotages the robot to damage property or injure workers, because they didn't make the robot sufficiently resistant to sabotage attempts?


Is that something that you expect to be attempted against a significant fraction of the products over the average expected lifetime of the product? If so, they should absolutely be required to make a product fit for its usage context that is robust against common and standard failure modes, like adversarial attacks.

That such requirements are relatively uncommon in physical products due to their usage contexts being fundamentally airgapped from global adversaries is a blessing. Their requirements are lower and thus their expected standards can be lower as well. What a concept!

You seem to think that standards are some kind of absolute line held in common across all product categories, and that it is therefore unfair to hold some products to higher standards. That is nonsense. Expected standards are in relation to the problem and usage context and should minimize harm to the degree necessary to result in a positive societal cost-benefit over their lifetime.

That is how literally everything else works, and it is a principle that can be applied in general. “But my problems are different, can I ignore them and inflict devastating harm on other humans” is asinine. If you are causing net harm to society, you should not get to do it. Period.


Ok let's try another analogy. Should a carmaker be held liable if someone breaks into a car and steals something valuable (laptop, purse, etc.)? Or how about if someone drives irresponsibly (possibly under the influence) and injures or kills someone in an accident?

Those are things that a significant number of cars will encounter.

And yes, if you don't put any effort into making your car secure or safe, there should be liability. But if you put significant effort into security, it is unreasonable to make you liable just because there is more you could have done, because there is always more you could have done, and there are significant tradeoffs between security, safety, usability, and cost.

Maybe you could build a car that is impossible to break into, but it would be a huge pain to use, and most people wouldn't be able to afford it.

> inflict devastating harm on other humans

I'm not sure where you are getting this "devastating harm" from.

The context of this is the large number of "urgent" security patches and vulnerability disclosures.

The thing is, the vast majority of those are probably not as bad as the terminology might lead a layperson to believe. Often, in order to exploit a vulnerability in the real world, you have to chain multiple vulnerabilities together.

A lot of "critical" vulnerabilities in software I need to patch, I read through the description and determine that it would be extremely difficult, or impossible to exploit the way I use the software, but the patch is still necessary because some (probably small) set of users might be using it in a way that could be exploited.


You said: “If a single simple mistake has the potential to cause devastating liability”

Liability is in proportion to actual harm. A single simple mistake can only incur devastating liability if it causes devastating harm (as in harm actually occurred). If it can not and does not cause devastating harm, then you will not incur devastating liability and you are already protected.

Again, you are applying some sort of bizarre uniform standard for defect remediation that ignores all nuance around intended use, intended users, usage context, harm, potential for harm, etc. That is not how anything works. Toys have different standards than planes which have different standards than bridges. The world is not unary.

As for the questions of cars you posed. Those fall into slightly more nuanced principles.

One of those principles is that liability is in proportion to/limited by guarantees either explicit or implicit. The limitations of the physical security of car windows are clearly communicated, obvious, and well-understood to the average consumer. A company guaranteeing that their windows are immune to being broken would be taking on the liability for their guarantee that is in excess of normal expectations.

Software security and reliability is not just not clearly communicated, it is deceptively communicated. Do you believe that the average Crowdstrike customer believed that Crowdstrike could take out their business? Or were Crowdstrike’s sales and marketing continuously assuring their customers that Crowdstrike would work flawlessly despite the fine print disclaiming all liability? If the customer actually believed it could or would fuck up royally, then sure, no liability. Customer’s fault for using a product they believed to be a piece of shit. But would Crowdstrike have ever sold a single seat if their customers believed that? Fuck no. They made an implicit and likely explicit guarantee of operability and they are liable for it.

Liability around dangerous operation is again, about clear understanding of operation guarantees. It is obvious to the most casual observer the dangers of a chainsaw and how one might be safely used. It is the job of the manufacturer to make such boundaries clear, to mitigate unknowing unsafe operation, and to mitigate unsafe operation where feasible. To go back to Crowdstrike, you can not contend that it was used “dangerously”. The defect was in failure to perform correctly, not inappropriate usage. Dangerous usage might be enabling a feature that automatically deletes any program that starts sending on the network, but you forgot that your critical server starts sending notifications to your team when under heavy load. That is likely your damages to bear. But if it was actually a feature that deletes any “anomalous program”, then that depends on the guarantees provided.

Making careful tradeoffs about the distribution of damages and liability is a well-worn path. Throwing your hands up in the air because your problems are harder and more dangerous so you should be held to lower standards is absurd.

And please, you make it sound like software is built to some high standard comparable to your average physical product and that it is unfair to hold it to higher standards. You and I both know software is generally held to exactly no standard. Demanding it be held to even normal standards is not some sort of unfair imposition. It should probably be held to higher standards due to the risks of correlated failures due to network connectivity as you point out, but even just getting to normal standards would be a massive and qualitative improvement.


> You said: “If a single simple mistake has the potential to cause devastating liability”

I think you are misinterpreting my statement. I meant devastating to the software creator. Which is not necessarily the same thing as devastating to the victim of an attack.

> A single simple mistake can only incur devastating liability if it causes devastating harm (as in harm actually occurred)

This is absolutely not true. Even if you ignore any possible inequities that exist in the justice system (like say that civil courts tend to favor the party that spends more on lawyers), the same absolute amount in dollars has a very different impact on different parties.

For example, suppose a single developer or early bootstrapped startup makes a library or application (possibly open source), a big bank uses that library, and then a vulnerability in it is used as part of an attack that causes a few million in damages. That few million would be a relatively small amount to the big bank, but would likely bankrupt an individual or small startup, and would thus be "devastating".

> Again, you are applying some sort of bizarre uniform standard for defect remediation that ignores all nuance around intended use, intended users, usage context, harm, potential for harm, etc.

I don't know why you think that.

> One of those principles is that liability is in proportion to/limited by guarantees either explicit or implicit. The limitations of the physical security of car windows are clearly communicated, obvious, and well-understood to the average consumer. A company guaranteeing that their windows are immune to being broken would be taking on the liability for their guarantee that is in excess of normal expectations.

Right. And most software doesn't come with any guarantee that it is bug free, or even free of security bugs. Afaik, software doesn't currently have some kind of exemption from liability. But it is very common to push as much liability onto the user of the software as possible with terms of service or licensing terms. In fact, almost all open source licenses have disclaimers saying that the copyright holder makes no guarantees about the quality of the software.

> Software security and reliability is not just not clearly communicated, it is deceptively communicated.

That's a broad generalization, but I agree that there are many cases where that is true. But if what sales and marketing says differs from the actual terms of the contract or ToS, wouldn't that be more a problem of false advertising?

And security is an imprecise and relative term.

And there are cases where software contracts do have guarantees of reliability, and sometimes security, and promise to compensate damages if those guarantees aren't met. But you usually have to be willing to pay a lot of money for an enterprise contract to get that.

Could this situation be improved? Absolutely!

I think if your ToS pushes liability onto the customer, at the very least that should be made much clearer to the customer (and the same goes for many other things hidden in the fine print), and then maybe market forces would push more companies to make stronger guarantees to win customers.

But that problem is hardly unique to software. Lots of companies hide stuff like "you take full responsibility, and agree not to sue us" in their fine print.

> Do you believe that the average Crowdstrike customer believed that Crowdstrike could take out their business?

The Crowdstrike issue is unrelated. As you admitted later, there was no malicious actor involved. And there were significant problems with their deployment process. I absolutely do think they should be held liable.

But that is very different from something like, say, a buffer overflow due to a simple mistake that slipped through rigorous review, testing, fuzzing and maybe even a pen test.

> you can not contend that it was used “dangerously”.

I don't.

> Throwing your hands up in the air because your problems are harder and more dangerous so you should be held to lower standards is absurd.

That isn't at all what I am saying. I'm saying that developers shouldn't be held responsible for the actions of criminals just because those criminals used an unknown weakness of the software to commit the crime. Doing so is holding software up to a higher standard than most other products. Now just as with physical products, I think there are exceptions, like if your product is sold or marketed specifically to prevent specific actions of criminals, and fails to do so.

Ironically, blaming developers for cybercrime is throwing up your hands because your problems are hard. Specifically, stopping cybercrime with law enforcement is extremely difficult, in part because the criminals are often beyond the jurisdiction of the applicable law enforcement.

But maybe putting some government funding towards increasing security of critical components, especially open source ones, or initiatives to rewrite those components in memory safe languages, would be more effective than pointing fingers?

> You and I both know software is generally held to exactly no standard.

Um, just like physical products, the standards for software vary widely. What I am opposed to is applying some kind of blanket liability to all software products. Because a game shouldn't be held to the same standard as an enterprise firewall, or anti-malware solution.

And there absolutely are standards and certifications for security and reliability in software. ISO 27001, SOC 2, PCI, FedRAMP, HIPAA, just to name a few. And if you sell to certain organizations, like governments, financial institutions, health care providers, etc. you will need one or more of those.


Moving the goalposts: if the person was injured by something the mechanical engineer made themselves, then, unlike in the software industry, they are indeed liable, and can even be the target of a lawsuit.

And since a mechanical engineer is a proper engineer, not someone who calls themselves an engineer after a six-week bootcamp, they can additionally lose the professional title.


The goalpost isn't moving. Comparisons to physical products are wrong because the question is not "are there safety critical bugs" but "can this product survive a sustained assault by highly trained adversaries", which is a standard no other product is held to.


Silly me, thinking there are such things as product recalls, liability for food poisoning, and use cases that insurers don't cover due to the high liability of hazardous goods.

Anything that brings proper engineering practices into computing, and liability for malpractices, gets my vote.


Software bugs almost never poison you, so that isn't applicable.

Product recalls happen every day in the software industry, voluntarily. We call them software updates and the industry does them far better and more smoothly than any other industry.

Use cases that aren't covered due to liability can be found in any EULA. You aren't allowed to use Windows in nuclear power stations for example.


Actually they do, when faulty software powers something food-related, anywhere in the delivery chain from the farmer to the point where someone actually eats something.

EULAs are meaningless in many countries outside US law; most folks have yet to bother suing companies because they have been badly educated to accept faulty software, while in the material world that is something they only accept from cheap knockoffs at street bazaars and 1-euro shops.

Maybe that is the measure for software then.

And even those do require a permit for selling, in many countries, or some exchange of goods between sellers and law enforcement checking permits.


Is a non-network-connected car model at risk of all cars being taken over by an economically viable, sustained assault by highly trained adversaries in a foreign country? No.

Is a network connected car vulnerable to such attacks? Yes.

The product decision to make the car network connected introduced a new, previously non-existent risk. If you want to introduce that new risk, you (and anybody else who wants to introduce such risk) are responsible for the consequences. You can just not do it if that scares you.

“But other people get to profit off of endangering society and I want to do it too but I do not want to be responsible for the consequences” is not a very compelling argument for letting you do that.

That you need to reach higher standards for new functionality you want to add is the entire point. It is how it works in every other industry. Demanding lower standards because you want to do things that are harder and more dangerous is ass backwards.


Ok, now consider that there is a software component, say an open source library, that has a vulnerability in it, and a car maker used that component in their network-connected car; then a well-funded foreign adversary exploited that vulnerability, among others, to do something nefarious. That software component wasn't specifically designed to be used in cars, but it wasn't specifically designed not to be either; it was a general purpose tool. Should the maker of that component be held liable? They weren't the ones who decided to connect a car to the internet. They certainly weren't the ones who decided not to fully isolate the network-connected components from anything that could control the car.

But if software doesn't have a way to push liability onto the user, then you can bet the big automaker will sue that developer in Nebraska[1] into oblivion, and the world might be worse off because of it.

I don't think that you should be able to do something like sell network connected cars, without putting any effort into security. But at the same time, I think a blanket requirement to be liable for any security vulnerabilities in your software could have a lot of negative consequences.

[1]: https://xkcd.com/2347/


If it was an electronic component that set the car on fire due to a short circuit, the maker of the electronic component would certainly be sued, and was the electronic component designed to be placed on car circuit boards in the first place?


3. Legal incentives. When somebody dies or fails to receive critical care at a hospital because a Windows XP machine got owned, somebody should probably face something along the lines of criminal negligence.


Will Microsoft face liability if someone dies or fails to receive critical care because some infrastructure system auto-rebooted to apply an update and lost state/data relating to that patient's care?


That's the only thing that'll make Microsoft chill with the reboots


Yes.


The person who should be fined etc. is the person who said to put the machine on the internet.


> Professional structural engineers, for example, are used to taking liability for their designs and buildings. But with software security the complexity is nearly infinitely higher, and making it secure is much harder to guarantee.

I'm not sure about your claim that structural engineering is less complex, but there's another (arguably much more significant) difference: structural safety is against an indifferent adversary (the weather, and physics); software security is against a malicious adversary. If someone with resources wants to take down a building (with exception for certain expensive military installations), no amount of structural engineering is going to stop them. Software that isn't vulnerable to cyberattacks should be compared to a bunker that isn't vulnerable to coordinated artillery strikes, not to your average building.


You can also choose to avoid complexity.

Often a shorter computer program that is easy to understand can do exactly what a more complicated program can. We can simplify interfaces between systems and ensure that their specifications are short, readable and implementable without allocation, buffers or other things that can be implemented incorrectly. We can ensure that program code is stored separately from data.

Now that LLMs are getting better we could probably have them go through all our code and introduce invariants, etc. to make sure it does exactly what it's supposed to, and if it can't then a human expert can intervene for the function for which the LLM can't find the proof.

I think hardware manufacturers could help too. More isolation, Harvard architectures etc. would be quite appealing.


> economic incentives

EU CRA incoming, https://ubuntu.com/blog/the-cyber-resilience-act-what-it-mea...

How will the US version differ?


Making vendors understand that the era of EULAs waiving responsibility is coming to an end, and that, like in any other grown-up industry, liability is coming.


If customers want their software vendors to take liability for software defects (including security vulnerabilities) then they can just negotiate that into licensing agreements. We don't need the federal government to get involved with new laws or regulations.


> ways that software vendors will be held liable for bugs in their products.

And if we do that then the state of software would grind to a halt, because there is no software that is completely bug-free.[1]

Like it or not, the market has spoken on what it considers acceptable risk for general software. Software where human lives are at risk is already highly regulated, which is why so few human lives are lost to bugs, compared to lives lost to other engineering defects.

[1] I feel we are already at a point where the market has, economically anyway, hit the point of diminishing returns for investment into reliability in software.


To get laws that punish certain behavior, that behavior must be considered a bad thing, in the sense of being morally wrong.

So it's a first step to liability.


> Making truly secure software products is incredibly hard in this day and age

I politely disagree. Writing secure software is easier than ever.

For example, there are several mainstream and full-featured web frameworks that use managed virtual machine runtimes. Node.js and ASP.NET come to mind, but there are many other examples. These are largely immune to memory safety attacks and the like that plague older languages.

Most popular languages also have a wide variety of ORMs available that prevent SQL injection by default. Don't like heavyweight ORMs? No problem! There's like a dozen micro-ORMs like Dapper that do nothing to your SQL other than block injection vulnerabilities and eliminate the boilerplate.
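
As a rough sketch of what that looks like in practice (the table, column, and type names here are invented for illustration), the micro-ORM style just binds parameters instead of splicing strings:

  using System.Data;
  using Dapper;  // the micro-ORM mentioned above

  public class User
  {
      public int Id { get; set; }
      public string Email { get; set; }
  }

  public static class UserQueries
  {
      // Unsafe, for contrast: concatenation lets a crafted email rewrite the query.
      //   var sql = "SELECT * FROM users WHERE email = '" + email + "'";

      // Safe: @Email is sent as a bound parameter, so the input is always
      // treated as data and never parsed as SQL.
      public static User FindByEmail(IDbConnection db, string email) =>
          db.QueryFirstOrDefault<User>(
              "SELECT id, email FROM users WHERE email = @Email",
              new { Email = email });
  }

The heavier ORMs do the same thing under the hood; the point is simply that user input never becomes part of the query text.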

Similarly, web templating frameworks like Razor pages prevent script injection by default.

Cloud app platforms, containerisation, or even just virtual machines make it trivial to provide hardware enforced isolation on a per-app basis instead of relying on the much weaker process isolation within an OS with shared app hosting.

TLS 1.3 has essentially eliminated cryptographic vulnerabilities in network protocols. You no longer have to "think" about this concern in normal circumstances. What's even better is that back end protocols have also uniformly adopted TLS 1.3. Even Microsoft Windows has started using it for the wire protocol of Microsoft SQL Server and for the SMB file sharing protocol! Most modern queues, databases, and the like use at least TLS 1.2 or the equivalent. It's now safe to have SQL[1] and SMB shares[2] exposed to the Internet. Try telling that to someone in sec-ops twenty years ago!

Most modern PaaS platforms such as cloud-native databases have very fine-grained RBAC, built-in auditing, read-only modes, and other security features on by default. Developers are spoiled with features such as SAS tokens that can be used to trivially generate signed URLs with the absolute bare minimum access required.

Speaking of PaaS platforms like Azure App Service, these have completely eliminated the OS management aspect of security. Developers never again need to think about operating system security updates or OS-level configuration. Just deploy your code and go.

Etc...

You have to be deliberately making bad choices to end up writing insecure software in 2025. Purposefully staring at the smörgåsbord of secure options and saying: "I really don't care about security, I'm doing something else instead... just because."

I know that might be controversial, but seriously, the writing has been on the wall for nearly a decade now for large swaths of the IT industry.

If you're picking C or even C++ in 2025 for anything you're almost certainly making a mistake. Rust is available now even in the Linux kernel, and I hear the Windows kernel is not far behind. Don't like Rust? Use Go. Don't like Go? Use .NET 9. Seriously! It's open-source, supports AOT compilation, works on Linux just fine, and is within spitting distance of C++ for performance!

[1] https://learn.microsoft.com/en-us/azure/azure-sql/database/n...

[2] https://learn.microsoft.com/en-us/azure/storage/files/files-...


This is a strange and myopic understanding of "application security". You seem quite focused on vulnerabilities that could threaten underlying platforms or connected databases, but you're ignoring things like (off the top of my head) authentication, access control, SaaS integrations, supply chains, and user/admin configuration.

Sure, write secure software where nobody signs in or changes a setting, or connects to Google Drive, and you have no dependencies... Truly mythical stuff in 2024.


I wanted to avoid writing war & peace, but to be fair, some gaps remain at the highest levels of abstraction. For example, SPA apps are a security pit of doom because developers can easily forget that the code they're writing will be running on untrusted endpoints.

Similarly, high-level integration with third-party APIs via protocols such as OIDC have a lot of sharp edges and haven't yet settled on the equivalent of TLS 1.3 where the only possible configurations are all secure.


Still too narrow. Even I assumed "application security" when this isn't the point of the comments. We're talking about the gamut, from the very infrastructure AWS or VMWare is writing, mainframe OS, Kubernetes, to embedded and constrained systems like IoT, routers, security appliances, switches, medical devices, automobiles.

You don't just tell all those devs to throw them on Rust and a VM, whether it's 2024 or December 31, 1999.


It's important to remember that even `npgsql` can have issues (see https://github.com/npgsql/npgsql/security/advisories/GHSA-x9...).

In your world, would the developer of a piece of software exploited via a vulnerability such as this be liable?


The point is that secure software is easier to write, not that it's impossible to have security vulnerabilities.

Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.

Postgres is written in C and uses a complicated and bespoke network protocol. This is the root cause of that vulnerability.

If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.

The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.

Collectively, we need to start saying "no" to this legacy.

Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.


> Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.

Ah, there is the issue: protocol level bugs are language independent; even memory safe languages have issues. One example in the .NET sphere is F*, which is used to verify programs. I recommend you look at what the concepts of protocol safety actually involve.

> The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.

This defect in particular occurs in the c# portion of the stack, not in postgres. This could have occurred in rust if similar programming practices were used.

> If Postgres was a modern RDBMS platform, it would use something like gRPC and there wouldn't be any need to hand-craft the code to perform binary encoding of its packet format.

There is no guarantee such a client implementation would be defect free either.

This is a much harder problem than I think you think it is. Without resorting to a very different paradigm for programming (which, frankly, I don't think you have exposure to based upon your comments) I'm not sure it can be accomplished without rendering most commercial software non-viable.

> Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust saying that Rust will be a second class citizen for the foreseeable future.

Yeah, I mean, start your own OS in Rust from scratch. There is a very real issue that RIIR isn't always an improvement. Rewriting a Linux implementation from scratch in Rust, if it's a "must have right now" fix, is probably better.


The counter to any argument is the lived experience of anyone that developed Internet-facing apps in the 90s.

Both PHP and ASP were riddled with security landmines. Developers had to be eternally vigilant, constantly making sure they were manually escaping HTML and JS safely. This is long before automatic and robust escaping such as provided by IHtmlString or modern JSON serializers.

Speaking of serialisation: I wrote several, by hand, because I had to. Believe me, XML was a welcome breath of fresh air because I no longer had to figure out security-critical quoting and escaping rules by trial and error.

I started in an era where there were export-grade cipher suites known to be compromised by the NSA and likely others.

I worked with SAML 1.0 which is one of the worst security protocols invented by man, outdone only by SAML 2.0. I was - again — forced to implement both, manually, because “those were the times”.

We are now spoiled for choice and choose poorly despite that.


> protocol level bugs are language independent; even memory safe languages have issues. [...] This defect in particular occurs in the c# portion of the stack, not in postgres. This could have occurred in rust if similar programming practices were used.

But it couldn't have occurred in Python, for example, and Swift also (says Wikipedia) doesn't allow integer overflow by default. So it's possible for languages to solve this safety problem as well, and some languages are safer than others by default.

C# apparently has a "checked" keyword [0] to enable overflow checking, which presumably would have prevented this as well. Java uses unsafe addition by default but, since version 8, has the "addExact" static method [1] which makes it inconvenient but at least possible to write safe code.
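
For anyone curious, here is a minimal C# sketch (a toy size calculation, not the actual Npgsql code) contrasting the default wrapping behavior with a checked expression:

  using System;

  class OverflowDemo
  {
      static void Main()
      {
          int length = int.MaxValue;

          // Default C# arithmetic is unchecked (unless the project enables
          // overflow checks), so this silently wraps to a negative value.
          Console.WriteLine(unchecked(length + 16));

          // In a checked context the same addition throws OverflowException,
          // so a bad size calculation fails loudly instead of propagating.
          try
          {
              Console.WriteLine(checked(length + 16));
          }
          catch (OverflowException)
          {
              Console.WriteLine("overflow caught");
          }
      }
  }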

[0] https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

[1] https://docs.oracle.com/en/java/javase/19/docs/api/java.base...


> C# apparently has a "checked" keyword [0] to enable overflow checking, which presumably would have prevented this as well. Java uses unsafe addition by default but, since version 8, has the "addExact" static method [1] which makes it inconvenient but at least possible to write safe code.

This is the point I'm making: verifying a program is separate from writing it. Constantly going back and saying "but if we just X" is a distraction. Secure software is verified software, not bugfixed.

And that's the point a lot of people don't like to acknowledge about software. It's not enough to remediate defects: you must prevent them in the first place. And that requires program verification, which is a dramatically different problem than the one OC thinks they're solving.


That is why other industries have bills of materials and supply chain validation.

A bill of materials is already a reality in highly critical computing deployments.


Yes, but 0-days exist. A software BOM is really only remedial.


3. Regulations.

Yes I know the typical visitor to HN is deathly allergic to government regulations and regulatory bodies in general. That attitude in tech is how we got here.


I agree with her about blaming developers, not hackers. Though not to the point of liability for all developers, but maybe for a few specialist professionals who take on that responsibility and are paid appropriately for it.

Hackers are essentially a force of nature that will always exist and always be unstoppable by law enforcement because they can be in whatever country doesn't enforce such laws. You wouldn't blame the wind for destroying a bridge - it's up to the engineer to expect the wind and make it secure against that. Viewing them this way makes it clear that the people responsible for hacks are the developers in the same way developers are responsible for non-security bugs. Blame is only useful if you can actually use it to alter people's behavior - which you can't for international hackers, or the wind.

Banging this drum could be effective if it leads to a culture change. We already see criticism of developers of software that has obvious vulnerabilities all the time on HN, so there's already some sense that developers shouldn't be extremely negligent/incompetent around security. You can't guarantee security 100% of course, but you can have a general awareness that it's wrong to make the stupid decisions that developers keep making and are generally known to be bad practice.


Developers build insecure software in part because of themselves and in part because of the decisions made by their managers, up to the CEO.

So when you write "developers" we must read "software development companies".


Yes, that's what I meant too, sorry.


> I agree with her about blaming developers, not hackers.

They are clearly called "villains".

Wind isn't a person capable of controlling their actions. There is no intention to do harm. Hackers aren't senseless animals either. Yes, it's the developers' fault if a product isn't secure enough, but it's also not wrong to put blame on those actively exploiting that. Let's not stop blaming those who do wrong --- and that kind of hacker is doing wrong, not just the developers "making stupid decisions".

Those aren't mutually exclusive


> They are clearly called "villains".

As readers of the article know, they are not:

> "The truth is: Technology vendors are the characters who are building problems" into their products, which then "open the doors for villains to attack their victims."

She’s talking about companies, not individual developers, and she didn’t call them villains but rather creators of the problems actual villains exploit. The company focus is important: it’s always easy to find who committed a problematic line of code - say a kernel driver which doesn’t validate all 21 of its arguments properly - but the person who typed that in doesn’t work alone. The company sets their incentives, provides training (or not), and most importantly should be pairing the initial author of that code with reviewers, testers, and quality tools. When a company makes a $50 toaster, they don’t just ask the designer whether they think it’s safe, they actually test it in a variety of ways to get that UL certification, and we have far fewer fires than we had a hundred years ago.

One key to understanding this is to remember CISA’s scope and mission. They’re looking at a world where every week has new ransomware attacks shutting down important businesses, even things like hospitals, industrial espionage is on the rise and the industry has largely tried to stay in the cheaper reactive mode of shipping patches after problems are discovered rather than reducing the rate of creating them. This is fundamentally not a technical issue but an economic one and she’s trying to change the incentive structure to get out of the cycle which really isn’t working.


> put blame on those actively exploiting that

To some extent hackers are like the wind. They're a nebulous cloud of unidentifiable possible-people that you can't influence through shaming or laws or anything else. I think we should see them that way to make it clear that it's primarily the developer's responsibility.

Of course blame hackers when they're within reach too.


That is rich coming from a former NSA Tailored Access Operations agent. She had no problems paying companies to release insecure software, including some that have signed the "secure by design" pledge.


That is important context, but I still agree with what she's said in this article. It's also rich that Cisco especially -- a company known for hard-coding backdoors into their products for decades -- is "taking a pledge" to do better.


I agree that software should often get more tests to improve security.

I don't think supporting companies that sign a meaningless pledge improves anything and I question her motives in trying to shame people who use companies that have not signed this pledge.


I see it as the opposite: as ex Deputy Head of TAO, Easterly is no fool.

And there's a difference between defective software that leads to vulns exploited by crime gangs and NOBUS backdoors that the good guys use to keep you safe. Sounds bullshit, right?

That's how far the public discourse on cyber has diverged from the reality, which is part of the issue. Easterly's push for renaming cyber actors and flaws is smart. Bad quality comes from mindset, attitude. And names are important, as programmers should know! :)

I would prefer it if she had a GitHub profile tho. Always cool if you do that.


So I'm aware she worked for the NSA but this is the first I'm hearing of her working for TAO.

I had thought she worked at the NSA's IAD (defensive side) pre-merge of the offensive and defensive sides.


I think NSA and CISA have different objectives.


Pretty sure I'm going to burn some karma on this one but what the hell.

To the best of my knowledge there is no evidence in over four decades of commercial software development that supports the assertion that software can be truly secure. So to my mind this suggests the primary villains are the individuals and organizations that have pushed software into increasingly sensitive areas of our lives and vital institutions.


As someone who thinks the industry should professionalize, I actually agree with you somewhat; we pushed too far, too fast, and into territory that software has no business being in.


What's the definition of truly secure?


The world economy is hemorrhaging over a trillion dollars USD annually to cybercrime, so however you choose to define it it clearly doesn't exist.


If you defined it as absence of runtime errors, it exists.


Even if that's true (massive if) it's a meaningless assertion. You've still got to deal with several tiers of hardware vulns, the OS space, and figure out a way to provably secure any network traffic generated. "hello world" tends to be pretty secure in theory but it isn't particularly useful.


It's not meaningless, because relevant security issues arise from runtime errors. These can be completely fixed by using Ada/SPARK or Rust. Hardware vulns are not very important for many systems.


LOL The Rust Evangelism Taskforce has entered the chat


I don't use Rust. And you haven't given a single argument.


*points at the aforementioned forest fire of money* If you had a way to put that out, all of the best funded companies on the planet and just about every sovereign government would be kicking your door in. That's my argument. Observed results trump hypotheticals.


That does not make sense.


I assure you it does.


I think sometimes cyber is still seen as an unnecessary cost. Plenty of places do the bare minimum for security, and most of the time it's only after an incident that budgets suddenly get raised.

Software, hardware, policy, and employee training are all things one must focus on. You can't just start making RDX or fireworks without the proper paperwork, permits, licenses, fees, and a lawyer around to navigate everything. If you run a business without investing anything into IT and cybersecurity, you just make it easier for an incident to occur. And remember, just because your product isn't IT or cybersecurity doesn't mean it's losing money; it's a cost of doing business in our regulated market. If you mishandle HIPAA, PII, or sensitive info, and the customers realize you didn't take basic steps to stop this, you open yourself to a lawsuit. Think about it like this: investing in it every day means you're lowering that risk, by however much you think is reasonable to pay for, and every day it's paying for itself.


Dan Geer on prioritizing MTTR over MTBF (2022):

  Metrics as Policy Driver: Do we steer by Mean Time Between Failure (MTBF) or Mean Time To Repair (MTTR) in Cybersecurity?

  Choosing Mean Time Between Failure (MTBF) as the core driver of cybersecurity assumes that vulnerabilities are sparse, not dense.  If they are sparse, then the treasure spent finding them is well-spent so long as we are not deploying new vulnerabilities faster than we are eliminating old ones. If they are dense, then any treasure spent finding them is more than wasted; it is disinformation. 

  Suppose we cannot answer whether vulnerabilities are sparse or dense. In that case, a Mean Time To Repair (MTTR) of zero (instant recovery) is more consistent with planning for maximal damage scenarios. The lesson under these circumstances is that the paramount security engineering design goal becomes no silent failure – not no failure but no silent failure – one cannot mitigate what one does not recognize is happening.


I don't quite agree, but I do somewhat agree.

We need to professionalize and actually accept liability for our work. [1]

[1]: https://gavinhoward.com/2024/06/a-plan-for-professionalism/


Using the same logic, one can argue "SOFTWARE IS PROVIDED AS IS". It should be up to the user to choose the correct software based on their security policy.

I write software for fun and skillz, making computers do extraordinary things. If I start following regulation, then there is no fun for me and no software that does extraordinary things.

No ma'am Doom or Second Reality would not have been possible with this attitude.


The user's choice affects third parties, so it's not that simple.

I bet you won't recommend to install your software on essential systems.

>No ma'am Doom or Second Reality would not have been possible with this attitude.

Same is true for many kinds of malware.


> Same is true for many kinds of malware.

Would you prefer to not have Doom at all? (Over only some malware not existing)


Why does software require so many urgent patches?

Conspiracy theory: creating new bugs they can always fix later is a good source of continued employment.

Of course there's also the counterargument that insecurity is freedom: if it weren't for some insecurity, the population would be digitally enslaved even more by companies who prioritise their own interests. Stallman's infamous "Right to Read" is a good reminder of that dystopia. This also ties in with right-to-repair.

The optimum amount of cybercrime is nonzero.


One thing is that software development is not really focused on secure software. If Microsoft can call Recall an engineered product feature, where those M$ folks get paid stacks to work on it, can't even bother securing the data at rest, and then decide it's a great idea to use cloud computing for Recall, then I can totally see "security software engineer" becoming a separate field.

Securing software seems to be hard, especially in web development. You've got to worry about regular development, then all these crazy different exploit methods like XSS, SQL injection, data sanitization, etc., and then you've got to get the site working in multiple browsers; you need to juggle all of this. And if an API or some third-party tool gets compromised, how do you prevent that except with a crystal ball, a bottle of bourbon and a clairvoyant chant?

Also, there were IIRC people submitting bogus CVE reports to increase the mental drain on maintainers and overwhelm the human link, which is always the weakest.


It isn't malice, dude. Unfortunately, it really is just that 70% of developers are utterly incompetent.


If having security vulnerabilities in code you wrote or reviewed is a sign of incompetence, then there has probably never been a competent developer in the history of the industry.


I wouldn't say that. There are some non-obvious things, like timing attacks, that you probably shouldn't feel bad about.

If you're still writing SQL injections though, yeah, you're terrible.
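
For anyone wondering what the bar actually is here, a minimal sketch in C with SQLite (the table and function names are just illustrative): the fix is to bind untrusted input as a parameter instead of splicing it into the SQL string.

  #include <sqlite3.h>
  #include <stdio.h>

  /* Hypothetical example: look up a user by an attacker-controlled name.
     The commented-out sprintf() approach is the injectable anti-pattern;
     the prepared statement treats the input strictly as data. */
  int find_user(sqlite3 *db, const char *untrusted_name) {
      /* UNSAFE: sprintf(sql, "SELECT id FROM users WHERE name = '%s'", untrusted_name); */
      sqlite3_stmt *stmt = NULL;
      int rc = sqlite3_prepare_v2(db,
          "SELECT id FROM users WHERE name = ?1", -1, &stmt, NULL);
      if (rc != SQLITE_OK) return rc;

      /* A bound value can never terminate the string literal or add clauses. */
      sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);

      while (sqlite3_step(stmt) == SQLITE_ROW)
          printf("id = %d\n", sqlite3_column_int(stmt, 0));

      return sqlite3_finalize(stmt);
  }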


> Conspiracy theory: creating new bugs they can always fix later is a good source of continued employment.

That is absolutely a thing [1]. There are hardware devices that can only be fixed illegally, using third-party software, for this reason. The people making a workaround to the scam then get sued. The video is worth a watch in my opinion. It was created 3 years ago and the problem is still ongoing.

[1] - https://www.youtube.com/watch?v=SrDEtSlqJC4 [video][29min]


I used to be an IT guy at a structural and civil engineering firm. Those were real professional engineers with stamps and liability.

As long as "SWEs" do not have stamps and legal liability, they are not real (professional) engineers, IMHO.

My point is that I believe to earn the title of "Software Engineer," you should have a stamp and legal liability.

We done effed up. This breach of standards might be the great filter.

edit: Thanks to the conversation down-thread, the possibly obvious solution is a Software Professional Engineer, with a stamp. This means full-stack is actually full effing stack, not any bullshit. This means that ~1% to ~5% of SWE would be SWPE, as it is in other engineering domains. A SWPE would need to sign off on anything actually important. What is important? Well we figured that out in other engineering domains. It's time for software to catch the f up.


I actually looked into PE for software a while back. Are you aware that this was indeed a thing and was actually discontinued back in 2019 due to lack of participants? https://www.nspe.org/resources/pe-magazine/may-2018/ncees-en...


Ok, so overnight all programmers stop calling themselves Engineers. [1] What problem does that solve? I fix bugs all day, but I don't call myself a Software Doctor.

Frankly whether software people call themselves engineers or not matters to pretty much no-one (except actual engineers who have stamps and liabilities.)

Creating a bunch of requirements and liability won't suddenly result in more programmers getting certified and taking on liability. It'll just mean they stop using that title. I'm not sure that achieves anything useful. We'd still have the exact same software coming from the exact same people.

[1] for the record I think 'software engineer' is a daft title anyway, and I don't use it. I don't have an engineering degree. On the other hand I have a science degree and I don't go around calling myself a data scientist either.


That's fine, it just means that devs without stamps can't sign off on anything actually important. In real engineering, there is a difference between an Engineer and a Professional Engineer. The latter has a stamp.

I realize that this is nearly the opposite of our current environment. It is a lot more regulation, a lot more professional standards... but it worked for civil and structural, and those standards were written in blood.

Maybe what I am asking for is a PE for SWE, those people have stamps, and it would be really hard to get a SW PE. Anything deemed critical (like security), by regulation, would require a SW PE stamp. [0]

Software did in-fact eat the world. Why shouldn't it have any legal/professional liability like civil and structural engineering?

[0] In this case, full stack, actually means full freaking stack = SWPE


>> That's fine, it just means that devs without stamps can't sign off on anything actually important

For some definition of important.

But let's follow your thought through. Who decides what is important? You? Me? The developer? The end-user? Some regulatory body?

Is Tetris important? OpenSSL? Notepad++? My side project on GitHub?

If my OSS project becomes important, and I now need to find one of your expensive engineers to sign off on it and take liability for it, do you think I can afford her? How could they be remotely sure that the code is OK? How would they begin to determine if it's safe or not?

>> Software did in-fact eat the world. Why shouldn't it have any legal/professional liability like civil and structural engineering?

Because those professions have shown us why that model doesn't scale. How many bridges, dams etc are built by engineers every year? How does that compare to the millions of software projects started every year?

In the last 30 years we've pretty much written all the code, on all the platforms, in use today. Linux, Windows, the web, phones, it's all less than 35 years old. What civil engineering projects have been completed in the same time scale? A handful of new skyscrapers?

You are basically suggesting we throw away all software ever written and rebuild the world based on individuals prepared to take on responsibility and legal liability for any bugs they create along the way?

I would suggest that not only would this be impossible, not only would it be meaningless, but it would take centuries to get to where we are right now. With just as many bugs as there are now. But, yay, we can bankrupt an engineer every time a bug is exploited.


This has all been done before in mechanical, structural, and civil engineering. People die and then regulatory and industry standards fix the problems.

We do not need to re-invent the concepts of train engine, bridge, and dam standards again.

I mean, I guess we actually do. The issue is that software has not yet killed enough people for those lessons to be learned. We are now at that cliff's edge [0], [1].

Another problem might be that software influence is on a far more hockey-stick-ish growth curve than what we dealt with in mechanical, civil, and structural engineering.

Meanwhile, our tolerance for professional and governmental standards seems to be diminishing.

[0] https://news.ycombinator.com/item?id=39918245

[1] https://news.ycombinator.com/item?id=24513820

... https://hn.algolia.com/?q=hospital+ransomware


No, the world's infrastructure has never been rebuilt from scratch to higher standards, not in the last few thousand years. We have always built on what already exists, grandfathered in anything that seemed ok, or was important enough even if not ok, etc.

We often live in buildings that far predate any building code, or even the state that enacted that code. We still use bits of infrastructure here and there that are much older than any modern state at all (though, to be fair, if a bridge has been around for the last thousand years, the risk it goes down tomorrow because it doesn't respect certain best practices is not exactly huge).


There have long been multiple forms of professional software engineering in the aerospace, rail, medical instrumentation and national security industries, governed by standards such as ISO 26262, DO-178B/DO-178C/ED-12C, IEC-61508, EN-50128, FDA-1997-D-0029 and CC EAL/PP.

DO-178B DAL A (software whose failure would result in a plane crashing) was estimated in [1] to be writable at 3 SLOC/day for a newbie and 12 SLOC/day for a professional software engineer with experience writing code to this standard. Writing software to DO-178B standards was estimated in [1] to double project costs. DO-178C (newer standard from 2012) is much more onerous and costly.

I pick DO-178 deliberately because the level of engineering effort required in security terms is probably closest to that applied to seL4, which is stated to have cost ~USD$500/SLOC (adjusted for inflation to 2024).[2] This is a level higher than CC EAL7 as CC EAL7 only requires formal verification of design, not the actual implementation.[3] DO-178C goes as far as requiring every software tool used to verify software automatically has been formally verified otherwise one must rely upon manual (human) verification. Naturally, formally verified systems such as flight computer software and seL4 are deliberately quite small. Scaling of costs to much larger software projects would most likely be prohibitive as complexity of a code base and fault tree (all possible ways the software could fail) would obviously not scale in a friendly way.

[1] https://web.archive.org/web/20131030061433/http://www.euroco...

[2] https://en.wikipedia.org/wiki/L4_microkernel_family#High_ass...

[3] https://cgi.cse.unsw.edu.au/~cs9242/21/lectures/10a-sel4.pdf


With much humility, may I ask, have you been exposed to the world of PEs with stamps and liability?

Do you see the need for anything like this in the software world, in the future?


Professional engineers have been stamping and signing off on safety-critical software for decades, particularly in the aviation, space, rail and medical instrumentation sectors. Whilst less likely to be regulated under a "professional association" scheme, there have also been two decades of similar stamping of security-critical software under the Common Criteria EAL scheme.

The question is whether formal software engineering practices (and associated costs) expand to other sectors in the future. I think yes, but at a very slow pace mainly due to high costs. CrowdStrike's buggy driver bricking Windows computers around the world is estimated to have caused worldwide damages of some USD$10bn+.[1] Cheaper ways will be found to limit a buggy driver bricking Windows computers in the future than requiring every Windows driver be built to seL4-like (~USD$500/SLOC) standards.

If formal software engineering practices are implemented more as years go by, it'll be the simplest/easiest software touched first, with the highest consequences of failure, such as Internet nameservers.

[1] https://en.wikipedia.org/wiki/2024_CrowdStrike_incident


As a header, there's clearly a liability problem in modern software, which I'll get to later.

[pre-posting comment: I've moved the semi-rant portion to the bottom, because I realized I should start with the more direct issues first, lest the ranty-ness cause you to not read the less ranty portion :D ] <snip and paste below> Now getting to the point: there is a real problem in that companies can advertise products to do a certain thing, and they can then have a license agreement that says "we're not liable if it fails to do what we said it would do", but generally despite those licenses (which to be clear are a requirement for open source to exist as a concept), the law has found companies are liable for unreasonable losses.

So the question is just how liable a company should be for a bug in their software (or hardware, I guess, depending on where you place the hardware vs firmware vs software lines) that can be exploited. Even despite the "we have no liability" EULAs, plenty of companies have ended up with penalties for bugs in their software causing a variety of different awful outcomes.

But you're going a step further, you're now saying, in addition to liability for your errors, you're also liable for other people causing failures due to those errors, either accidentally or intentionally.

I am curious, and I would be interested in the responses from your Real Engineer coworkers.

Who is responsible if a bridge collapses when a ship crashes into it? An engineer can reasonably predict that that would happen, and should design to defend against it.

Let's say an engineer designs a building, and a person is able to bomb the building and cause it to collapse with only a small amount of fertilizer? What happens to the liability if the only reason the plot succeeded was because they were able to break past a security barrier?

Because here is the thing: we are not talking about liability if a product/project/construction fails to do what it says it will do (despite EULAs, companies generally lose in court when their product causes harm even if there's a license precluding liability). The question is who is liable if a product fails to stand up to a malicious actor.

At its heart, the problem we're discussing is not about liability for "the engine failed in normal use", it's "the engine failed after someone poured sugar into the gas tank", not "the wall collapsed in the wind" it's "the wall collapsed after someone intentionally drove their truck into it", not "the plane crashed when landing due to the tires bursting", it's "the plane crashed when landing as someone had slashed the tires".

What you're saying is that a Professional Engineer signing off on a design is saying not only "this product will work as intended", they're saying "this product will work even under active attack outside of its design window".

That's an argument that seems to go either way: I don't recall ever hearing about a lawsuit against door manufacturers due to burglars being able to break through the doors or locks, but on the other hand Kia is being sued due to the triviality of stealing their cars - and even then the liability claims seem to be due to the cost of handling the increased theft, not the damage from any individual theft.

[begin ranty portion: this is all I think fairly reasonable, but it's much more opinionated than the now initial point and I'm loathe to just discard it]

I'm curious what/who you think is signing off and what they are signing off on.

* First off, software is more complex than more or less any physical product, the only solution is to reduce that complexity down to something manageable to the same extent that, say, a car is. How many parts total are in your car? Cool that's how many expressions or statements your program can have. And because it's not governed by direct physical laws and similar interactions, then that's still more complex than a car.

* Second: no more open source code in commercial products - you can't use it in a product, because doing so requires sign off by your magical software engineers who can understand products more complex, again, than any single physical product

* Third: no more free development in what open source remains - signing off on a review now makes you legally liable for it. You might say that's great, I say that means no one is going to maintain anything for zero compensation and infinite liability.

* Fourth: no more learn development through open source contributions, as a variation of the above now every newbie that submits a change brings liability, so you're not accepting any changes from anyone you don't know, and who you don't have reason to believe is competent.

* Fifth: OSS licenses are out - they all explicitly state that there's no warranty or fitness for purpose, but you've just said the engineer that signs off on them is liable for it, which necessarily means that your desire for liability trumps the license.

* Sixth: Free development tools are out - miscompilation is now a legal liability issue, so now a GCC bug opens whoever signed off on that code to liability.

The world you're describing is one in which the model of all software dev, including free dev, now comes with liability that matches designing cars, or constructing buildings, both of which are much less complex and much more predictable than even modest OSS projects, and those fields all come with significant cost-based barriers to entry. The reason there are more software developers than there are civil or mechanical engineers is not because one is easier than the other, it's because you cannot learn civil or mechanical engineering (or most other engineering disciplines) as cheaply as software. The reason that software generally pays more than those positions is because the employer is taking on a bunch of the financial, legal, and insurance expenses required to do anything - the capital expenditure required to start a new civil or mechanical engineering company is vastly higher than for software, and the basic overhead for just existing is higher, which means employers don't have to compete with employees deciding to start their own companies. A bunch of this is straight-up capital costs, but the bulk of it is being able to have sufficient liability protection before even the smallest project is started. At that point, the company has insurance to cover the costs so the owners are generally fine, but the engineer that missed something - not the people forcing time crunches, short cuts, etc - is the one that will end up in jail. You've just said the same should apply to software: essentially the same as today - the company screws up and pays a fine/settlement - but now with lowered pay rates for the developers, and they can go to jail.

All because you have decided to apply a liability model that is appropriate to an industry where the things that are signed off on have entirely self-contained and mostly static behavior to a different industry where _by design_ the state changes constantly, so there is not, and cannot be, any equivalent "safety" model or system. Even in the industries that you're talking about, where analysis is overwhelmingly simpler and more predictable, products and construction fail. Yet now you're saying software development could just be the same as that. When developing a building, you can be paranoid in your design, and make it more expensive by overbuilding - civil engineering price competition is basically just a matter of "how much can I strip down the materials of construction" without it collapsing (noting that you can exactly model the entire behavior of anything in your design). Again, the software your new standards require is the same as that required by the space and airline industry that people routinely already berate for being "overpriced".

You've made a point that there are engineers, and professional engineers, and the latter are the only ones who sign off on things. So it sounds like you're saying patches can only be reviewed by specific employees, who take on liability for those changes, so now an OSS project has to employ a Professional Engineer to review contributions, who becomes liable for any errors they miss (out of curiosity, if a bug requires two different patches working together to create a vulnerability, which reviewer is now legally liable?). Those professional engineers now have to sign off on all software, so who is signing off on Linux? You want to decode images, hopefully you can find someone to sign off on that. Actually it's probably best to have a smaller in-house product, or a commercial product from another company who have had their own professional engineers sign off, and who have sufficient insurance. Remember that you need to remind your employees not to contribute to any OSS projects, and don't release anything as OSS, because you would now be liable if it goes wrong in a different product that you don't make money from (remember, your own Professional Engineers have signed off on the safety of the code you released; if they were wrong, you're now liable if someone downstream relied on that sign off).

This misses a few core details:

  Physical engineering is *not* software engineering (which yes as a title "software engineer" is not accurate in many/most cases as it does imply a degree of rigour absent in most software projects). Physical engineering does not employ the same degree of reuse and intermingling as occurs in software - the closest I can really think of is engine swaps in cars, but that's only really doable because the engine is essentially the most complex part of the car anyway (at least in an ICE), and even then the interaction with the rest of the car is extremely constrained, predictable, and can be physically minimized. For civil/structural engineering it's even more extreme: large construction (e.g. the complex cases) are not simply made by mashing together arbitrary parts of other projects - buildings fall into basically two categories: the exact same building with different dimensions and a different paint job, or entirely custom.

  Physical engineering has basically an entirely predictable state from which to work. The overwhelming majority of any physical design is completely static, the things that are dynamic have a functionally finite and directly modelable set of states and behaviors, and those states and behaviors vary in predictable ways in response to constrained options. Despite this, many (if not most, though quantifying this would be hard because there's also vastly more static engineering than dynamic) failures are in the dynamic parts of these projects than the static portions. Software is definitionally free of more or less any static behavior, the entire reason for software is the desire for constantly changing state.
A lot of failures in physical engineering are the result of failing to correctly model dynamic behavior, and again, software is entirely the part that Real Engineers have a tendency to model incorrectly.


I think in certain ways you are right that we need those standards and stamps. But colleges also need to be very clear and cease calling it software engineering, as it implies the expertise of an engineer.

Computer engineering degrees can sorta get away with this, as they have quite a bit of engineering classes they have to take, but I'm not sure if they can call themselves engineers legally.

It's hard because I believe certain titles are protected, like doctor, lawyer and engineer, but last I checked it varied by state in the USA at least.

Sidenote: sometimes people get a computer science degree, or a cyber security degree, or even a cyber warfare degree. All three seem to be used interchangeably in the security field.

On one hand I get how formal protected titles help uphold standards and create trust, but it also enforces the grip colleges have and creates more barriers to entering a field. Some states require 3 years of experience to get a state permit for certain activities when a federal permit is already needed, so in that case it's just more paperwork.

Imagine if we started demanding that the people who build houses - not the person who designed them, but the builders - be engineers, so we can trust their work and ability to do the job in the correct manner.

At what point do we sacrifice and pass legislation in the name of reliability and safety?


> As long as "SWEs" do not have stamps and legal liability, they are not real (professional) engineers

When and if that happens I'll move to carpentry. Good luck. Tech is already full of *it. The only thing that would make it even worse is stamps and a mafia-like org issuing the stamps and asking for contributions in return, as happens in the fields of medical care, law, bookkeeping, architecture or civil engineering.

Companies should instead certify the products which require certification and get liability insurance.


Where does CISA/NIST recommend (for software developers) or require (for government agencies integrating software) specific software/operating system hardening controls?

* Where do they require software developers to provide and enforce seccomp-bpf rules to ensure software is sandboxed from making syscalls it doesn't need to? For example, where is the standard that says software should be restricted from using the 'ptrace' syscall on Linux if the software is not in the category of [debugging tool, reverse engineering tool, ...]? (A minimal sketch of such a rule is shown after this list.)

* Where do they require government agencies using Kubernetes to use a "restricted" pod security standard? Or what configuration do they require or recommend for systemd units to sandbox services? Better yet, how much government funding is spent on sharing improved application hardening configuration upstream to open source projects that the government then relies upon (either directly or indirectly via their SaaS/PaaS suppliers)?

* Where do they provide a recommended Kconfig for compiling a Linux kernel with recommended hardening configuration applied?

* Where do they require reproducible software builds and what distributed ledger (or even central database) do they point people to for cryptographic checksums from multiple independent parties confirming they all reproduced the build exactly?

* Where do they require source code repositories being built to have 100% inspectable, explainable and reproducible data? As xz-utils showed, a software developer would need to show how test images, test archives, magic constants and other binary data in a source code repository came to be, and that they are not hiding something nefarious up the sleeve.

* Where do they require proprietary software suppliers to have source code repositories kept in escrow with another company/organisation which can reproduce software builds, making supply chain hacks harder to accomplish?

* ... (similar for SaaS, PaaS, proprietary software, Android, iOS, Windows, etc)
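
To be concrete about the first bullet: such a rule is tiny to express. A minimal sketch using libseccomp (assuming the library is available and linked with -lseccomp; deny_ptrace is just an illustrative name, and in practice systemd's SystemCallFilter= or a container runtime's seccomp profile achieves the same effect with no code at all):

  #include <seccomp.h>
  #include <errno.h>

  /* Illustrative only: allow all syscalls by default, but make ptrace()
     fail with EPERM so the process can't be used for debugging/injection. */
  int deny_ptrace(void) {
      scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);   /* default action: allow */
      if (ctx == NULL)
          return -1;

      /* Returning an errno keeps the failure visible to the program
         instead of silently killing the process. */
      if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(ptrace), 0) < 0 ||
          seccomp_load(ctx) < 0) {
          seccomp_release(ctx);
          return -1;
      }

      seccomp_release(ctx);   /* the loaded filter stays active in the kernel */
      return 0;
  }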

All that the Application Security and Development STIG Ver 6 Rel 1[1] and NIST SP 800-53 Rev 5[2] offer up is vague statements of "Application hardening should be considered" which results in approximately nothing being done.

[1] https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_ASD_...

[2] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.S...


I don't know how meaningful those countermeasures really are. Like, you're basically looking at the space of Linux kernel LPEs, and you're bringing it down to maybe 1/3rd the "native" rate of vulnerabilities --- but how does that change your planning and the promises you can make to customers?


Government agencies have got their hands tied just getting people to use MFA and not click on phishing emails. You're decades ahead of the herd if you're thinking about coding in SECCOMP-BPF.


Software sandboxing has a relatively good cost-to-benefit ratio at reducing the consequences of software bugs, which is why it's already implemented in a lot of software we all use every day. For example, it exists in Android apps, iOS apps, Flatpak apps (Linux), Firefox[1][2], Chromium browsers[3][4][5], SELinux-enabled distributions such as Fedora and Hardened Gentoo[6], OpenSSH (Linux)[7], postfix's multi-process architecture with use of ACLs, etc.

Kubernetes, Docker and systemd folk will be familiar with the idea of sandboxing for containers too, and they're able to do so using much higher level controls, e.g. turn on Kubernetes "Restricted pod security standard" for much stricter sandboxing defaults. Even if containerisation and daemon sandboxing aren't used, architects will understand the concept of sandboxing by just specifying more servers, each one ideally performing a separate job with the number of external interfaces minimised as much as possible. In all of these situations, the use of more granular controls such as detailed seccomp-bpf filters is most useful to mitigate the risks introduced by (ironically) security agent software that is typically installed alongside a database server daemon, web server daemon, etc within the same container.

Tweaking some Kubernetes, Docker or systemd config is _much_ cheaper and quicker to implement than waiting to rewrite software in a safer language such as Rust (a noble end goal). Even if software were rewritten in Rust, you'd _still_ want to implement some form of external sandboxing (e.g. systemd-nspawn applying seccomp-bpf filters based on some basic systemd service configuration) to mitigate supply chain attacks against Rust software which cause the software to perform functions it shouldn't be doing.

[1] Firefox Linux: https://searchfox.org/mozilla-central/source/security/sandbo...

[2] Firefox Windows: https://searchfox.org/mozilla-central/source/security/sandbo...

[3] Chromium multi-platform: https://github.com/chromium/chromium/blob/main/docs/security...

[4] Chromium Linux: https://chromium.googlesource.com/chromium/src/+/0e94f26e8/d... (seemingly Linux sandboxing is experiencing significant changes as this document or one similar to it does not appear to exist anymore)

[5] Chromium Windows: https://chromium.googlesource.com/chromium/src/+/HEAD/docs/d...

[6] https://gitweb.gentoo.org/proj/hardened-refpolicy.git/tree/p...

[7] https://github.com/openssh/openssh-portable/blob/master/sand...


I dunno. "Evil Ferret" and "Scrawny Nuisance" sound pretty good in our irony filled world.


So in a hypothetical world where the paranoid reign supreme and all software is safe and unusable because usage is not a protection goal, do they declare a revolution in the name of usability and economic speed to overthrow the evil protectors?


I hope she starts the crackdown with easily the biggest impact offender here, Microsoft


Was about half way through before I realised that the article was not, in fact, satirical. Half-expected to see harddrive in the URL.


What a joke. This role deserves way better, but I understand it's only been around since 2018.


This seems like a very pat dismissal of what upon reading further seems like a very reasonable critique of the industry. Vendors have been seriously exploiting their ability to decline responsibility for their product development decisions and that often has significant negatives for affected users.


I think we've all seen bad software. I agree it's a problem. What I question is where the blame is being placed.

> Naturally, if writing flawless code was super easy, it would be done without fail. Some developers are clearly careless or clueless, leading to vulnerabilities and other bugs, and sometimes skilled humans with the best intentions simply make mistakes. In any case, Easterly isn't happy with the current defect rate.

To be fair this is the author of the article saying this. However just about all the examples of guideline adherence are about product features, not bugs.

The software quality problem squarely lies with incompetent management and idiots with checklists. That this role is currently being filled by someone with a military background doesn't help at all.


I think that’s a good example of the problem with The Register, but I think we should try to focus more on Easterly’s quoted statements which accurately identify the problem as corporate.


I strongly disagree.

If someone puts cyanide in the coffee pot, we don't blame the engineer that designed the coffee pot for not making it cyanide proof.

Criminals are the criminals, not a developer that didn't code defensively enough. The fact that a government official is blaming developers for crimes they don't commit is fascist level rhetoric.


We did blame KIA for their shitty car locks.

And your example is a bad one, because you would blame the engineer if they sold it as cyanide-proof.

Software vendors claim their software is so safe that only highly sophisticated criminals could break it.

In reality it's often just script kiddies.


In the comment you responded to, they made no mention of the engineer expecting or claiming it is cyanide proof, so your response makes no sense.

Is the person in the article saying only people who claim that software is free of defects should be held liable?

I would never make a claim that software I've written is 100% secure, and I would stop writing software if I were held criminally liable for defects.


No member of the US government proposed locking up mechanics that build KIAs for the crime of theft of motor vehicles. (And if I am wrong, and they did, then I would be against that as well)

An official of the US government is saying they want software developers to be treated as criminals if their software can be exploited. That is an insane proposal.


A previous head of cyber security was fired when he said something like that.


You won't earn a prize for most secure, fewest bugs or longest uptime in our industry.

Days without incident is not a metric the software industry cares about, because it doesn't matter.

Our customers are vendor-locked-in, and because we have a market monopoly they can't do anything but accept our conditions.

If only the state would regulate our industry, but that won't happen, because we will call the regulator a communist and then every regulation will be deleted from the agenda.


>"We don't have a cyber security problem – we have a software quality problem. We don't need more security products – we need more secure products."

Uhmmm. The foundation of a lot of the modern economy is built on Windows and, as the Crowdstrike fiasco has shown, Windows requires security software running at the kernel level to save it from itself. If we truly want secure products, we should shutdown all Windows machines?


Crowdstrike also provided software for Linux systems. It's something you install to satisfy auditors and they would demand the same of any OS unless such functionality was built-in.


But wait... most Windows runs on Intel, we should shutdown Intel.

Of course shutting down Windows will have zero effect on security. There are plenty of exploits for Linux, and most data leaks are hosted on Linux (but have nothing to do with the OS.)

To the degree to which software is a failure at the coding level (like every SQL injection, phishing, social engineering, SIM swap, PHP attack ever), the OS is irrelevant.

In truth almost all security has to do with people: programmers and users. The OS might be a nice scapegoat but it's not the root of the problem. Blaming it, though, helps to deflect attention away from ourselves.


Running the world's critical infrastructure on verified, small, readable codebases rather than a diaspora of unvetted closed-source programs sprinkled across Windows and Linux systems sounds like a good start.


It won't matter unless the code is written specifically for the hardware and will only run on that hardware.


We would need to dump POSIX/WinXX and replace/upgrade them with something better, probably using an object-capabilities approach: WASI, Capsicum, etc.


> If we truly want secure products, we should shutdown all Windows machines?

There should be a period where you put a question mark.


Precisely.


What she is asking for is a radical economic restructuring of a free market.

It's attitudes like hers that lead to the federal government having the worst software imaginable. It just doesn't work unless everyone agrees to do it... so good luck


Liability for a defective product is a “radical restructuring”? It’s something we have in almost every other category of business - not perfectly but pervasive enough that software is really conspicuous as an outlier.


Absolutely. OEM car parts suppliers have liability not just for their own product but for whatever consequences happen downstream, like the cost of recalls, etc. And that makes sense because liability is on the companies that are in the best position to ensure their product is correct.

Vendors of engineering software used by car makers, on the other hand, have no such liability. It's software so its the user's responsibility.


Now imagine software priced like OEM car parts. We'd all be running some heirloom version of Turbo Pascal.


You know that some of the most profitable companies in the world are software companies, right? Putting more resources into security and robustness wouldn’t mean a copy of Excel costs $5,000, it’d mean that Microsoft’s profit margins go down slightly and they ship new features slightly slower. The incredible leverage of software engineering would still mean that they’re amortizing those costs across a billion users.


"Technology vendors are the characters who are building problems..."

there are vendors building problems into their products? isn't that a crime?

What a silly take.

"House developers that build weak doors into houses are the real problem, not the burglars"

Security is an age-old problem, it is not a new concept. What is different with information security is the complexities and power dynamics changed drastically.

I mean, really! She should know better, the #1 attack vector for initial access is still phishing or social-engineering of some kind. Not a specific vulnerability in some software.


Physical analogs do not apply to computer security because the burglar usually does not live in a place where the judicial system can punish them. How are you going to bring some random hackers from Russia, China, India, etc to justice if those countries will not extradite them for their crimes?

When the judiciary is no longer effective the only option is to design secure systems or balkanize the internet so only those who can be punished under the judiciary can access the computer system (which is nearly impossible with proxies).


the dynamics are different but hackers get prosecuted all the time. they don't always live in non-extradition countries.


Hint: she didn’t say that they were building them intentionally. Skimping is building a problem even if nobody had a Jira ticket saying they had to leave out bounds checking.


Most vulnerabilities are not there because someone was deliberately negligent either. I see no evidence of a trend where vendors are "skimping" on anything.


You don’t think the vendors who are years behind on dependency updates are skimping? Not the ones who are still struggling to ship patches on a better than quarterly cadence? Not the ones who still ship 90s-style C code to enterprise customers who are paying high prices for their security products? Not the ones still writing new code in memory unsafe languages? Not the ones who still tell customers to disable SELinux? Not the ones who still refuse to use the sandboxing features in modern operating systems?

Companies love the idea that you can't hold them liable for any defect they didn't intentionally build in, but software is the extreme outlier where they were able to avoid consumer safety regulations and thus the expense of hiring people who can even tell when something is risky. Shifting the cost back to the supplier would restore the market feedback mechanism which is currently missing, greatly improving the health of the industry.


Search MasterLock on Youtube and you'll see that people blame the lock maker rather than the burglars. That's because the lock maker is notorious for making insecure locks.

Social engineering is also something that can be protected against by developers, at least to some extent. Yubikey type 2FA is more resistant to that than user-input TOTP codes, for example. Nothing's bulletproof but it could certainly be improved in many cases. Wasn't some company that experienced a credential stuffing attack recently sued for not requiring 2FA for its users?

We don't tolerate house builders who says "well there's always some way water might get into the framing, so we don't need to bother installing flashings properly - a bit of caulking will do instead."


Even a yubikey isn't resistant to cookie theft, and plenty of TOTP-code-theft phishing kits exist. Users literally enable third-party APK side-loading and install malicious APKs on their Android phones because of social engineering. It's not new to security. If you wear a high-vis vest and carry a clipboard, you'd be let into even government buildings. That also is social engineering.

For your house builders statement, that is not a fair analogy. Is there evidence of a trend where software devs are saying "well there's always some other way of getting hacked so let's not bother doing things properly"?

Your masterlock analogy is on-point though, because of "threat model": the purpose of most doors and locks in the US for residential use is as a lightweight deterrent. Burglars can just break the glass window and walk through the no-fence perimeter. You can and probably should get a more secure lock, but it is only as strong as the door frame and windows, and whatever alarm system you're using. In other words, for that analogy (and for what the CISA boss is saying) to be valid, there needs to be evidence that burglars give up and go home when the lock is secure. I would even go further and ask for a proper root cause analysis. Do the makers of masterlock know how insecure their lock is? If they are indeed making it weak because of cost, then are they really to blame? They're a business after all. Where is the regulation for proper secure locks?

As a government agency, CISA shouldn't be blaming vendors, that's a cheap cop-out. They should be lobbying for regulation and laws, and then enforcing them. So, in the end, even if the CISA boss is right, ultimately she shouldn't be blaming vendors but explaining what she's been doing to pass regulations.



