
With the number of data breaches we see cropping up, I wonder if a similar law could be written to hold companies liable for the safe handling of personal data.



TBH, having software engineering output in general be held to some minimum safety, correctness and quality standard would be a godsend for the world.

But of course the developers will revolt against that.


If you want to call yourself "engineer" then at a minimum all those standards and minimum requirements should apply, no questions asked.

I've heard stories of civil engineers being hauled out of retirement and in front of a tribunal after some structure collapsed before its time, or it was found that the original design was flawed in some way.

An engineer "signing off" on something actually means something, with actual stakes.

Of course the "developers" will revolt against this, because they are not engineers. A developer does not get to sign off on anything, and everything they do must be scrutinised by the engineer in charge before it can be deployed.

Doing this for your common run-of-the-mill CRUD app or HTML website is obviously overkill. Just like you don't need an engineer to replace a broken window or hang a new door. But when it comes to things that actually matter, like data safety and privacy, you must ensure you bring in the right culture (never mind the job title).


> An engineer "signing off" on something actually means something, with actual stakes.

It's all about the social impact of a mistake.

The reason "engineers" sign off on stuff is not because they are engineers. It's because society frowns upon people dying due to a faulty building, wants to see those responsible for the disaster punished, and wants them never to be behind another project again.

Society's response to ensuring incompetent professionals can't sign off on stuff is to maintain an allowlist of individuals who can sign off, and to have mechanisms in place to remove from the allowlist those who screw up badly.

These allowlists are things like professional engineering institutions, and someone who is registered on the allowlist is called an engineer.

Now, does the software engineering world have anything of the sort in terms of importance to society? No. A service crashing or a bug being sneaked into a codebase does not motivate a strong response from society demanding that some people never get behind a keyboard again.


> ... does not motivate a strong response from society ...

Is it not time things change?

How many lives have been destroyed by shoddy software? The UK Post Office scandal? Therac-25? Leaked personal, intimate, or payment data?


> Is it not time things change?

No.

There is absolutely no misuse of any mundane service or desktop app that comes even close to matching the impact on society of an apartment building collapsing and killing its occupants.

> How many lives have been destroyed by shoddy software?

In relative terms, if you compare any grievance you might have with the sheer volume of services and applications running 24/7 all over the world... Zero dot zero percent.

You can't even make a case against using free software developed by amateur volunteers in production environments.


> misuse of any mundane service or desktop app

That's way, way over-trivializing the issue.

Self-driving cars absolutely will kill someone. The driver may be at fault for being distracted, but as long as the system allows for distracted drivers, the software is part of the process. See: aviation post-mortems.

Medical equipment is another example. We do not need to go as far as Therac; there have been a number of near-misses which, in the absence of independent post-mortems, are never recorded anywhere.

Drones and autonomous warfare also occupy many developers.

Social scoring systems, and taxation and payment systems, should also not be trivialized. Even if they do not get spectacular media coverage, they absolutely kill people. No independent post-mortems to be seen, blameless or otherwise.


Social media would like a word. The suicide rates of teen girls have tripled since the introduction of social media. To say nothing of the corrosive force on democracy and our ability to make sense of the world as a society.

A building collapsing might kill 10 people. A rare tragedy. Social media impacts billions. Even if you want to downplay the negative impact on each person, the total impact across the globe is massive.

Software has eaten the world, for good and ill. I think it’s high time we treated it seriously and took some responsibility for how our work impacts society.


> The suicide rates of teen girls have tripled since the introduction of social media.

In what country? In what year compared against what base year?

Disclaimer - not Australia anytime between 1910 and now:

https://www.aihw.gov.au/suicide-self-harm-monitoring/data/de...

In the USofA:

    Adolescent suicides rose from 8.4 per 100,000 during the 2012-2014 timeframe to 10.8 deaths per 100,000 in 2018-2020, according to the new edition of America's Health Rankings Health of Women and Children Report from the United Health Foundation.
https://www.americashealthrankings.org/learn/reports/2022-he...

So if it tripled there and then, teasing out just teen girls, there must have been a reference point from which it tripled (the overall adolescent figures above went from 8.4 to 10.8 per 100,000, roughly a 1.3x rise, nowhere near 3x) ...

Or is this just overzealous hyperbole?


> Or is this just over zealous hyperbole?

I'm referencing Jonathan Haidt's work: https://www.theatlantic.com/ideas/archive/2021/11/facebooks-...

But regardless, I find it pretty ridiculous to claim that we software engineers don't have a massive impact on the world. We've inserted software into every crevice of human civilisation, from my washing machine and how I interact with loved ones, all the way up to global finance and voting systems. You think technology companies would be the largest industry on the planet if we didn't have an impact on people's lives?

Leaving the social media point aside, all I'm arguing is that when harm actually occurs due to negligence, companies need to actually be held responsible. Just like in every other industry that doesn't have convenient EULAs to protect them from liability. For example, if Medibank leaks the health records of all of their customers, they should be punished in some way as a result - either fined by a regulatory agency or sued by their customers. Right now, they shift all of the harm caused by negligent behaviour onto their customers. And as a result, they have no incentive to actually fix their crappy software.

I don't want companies the world over to look at things like that and say "Well, I guess Medibank got away with leaking all their customers' data. Let's invest even less time and effort into information security". I don't want my data getting leaked to be a natural and inevitable consequence of using someone's app.

Even from a selfish point of view, people will slowly use less and less information technology as a result, because they have no ability to know which companies they can trust. This is already happening in the IoT space. And that's ultimately terrible for our industry.


> I'm referencing Jonathan Haidt's work

Why not link to real numbers though? Haidt doesn't understand the numbers, misquotes them out of context, and mangles the data.


Oh, social media companies have an enormous impact on the world - through decisions made at C-suite and senior management level about what to demand of software engineers and how to deploy that work.

The impact of software engineers perhaps falls more in the "failed to whistle-blow" category than the "evil Dr. Strangelove" box ... save for those very few who actually rise to a position of significance in strategic decision making.

That aside, the teen girl suicide rate underpinning your reference seems to be about 2x, from 2.8 per 100K (ish) circa 2000 to 5.5 per 100K in 2017.

Jonathan Haidt links to this paper: https://jamanetwork.com/journals/jama/article-abstract/27358...

which doesn't include the figures. The full reproduced JAMA Research Letter from 2019 with figures and all is here: https://sci-hub.ru/10.1001/jama.2019.5054

As a research letter from JAMA I take that as fair reporting of some raw CDC data - I don't know how representative that result is in the fullness of reflection, normalisation, and other things that happen with data over time. To be clear, I'm not quibbling, and I thank you for the link.

Haidt also makes clear that "correlation does not prove causation" and argues that "no other suspect is equally plausible".

I'm 100% willing to align myself with the "social media as it stands is a scourge on humanity and young minds (for the most part)" camp.

I'm equally on board with the view that corporations are shit at personal data security and should have their feet held to the fire until they improve.


The link to mental health and suicide rates is far from shown, and could have any number of confounding factors.

Perhaps a better example would be the situation in Myanmar. It has been shown beyond doubt that it was in fact a genocide in the full meaning of the term, and that it was made much worse by Facebook.

Both by design, where their algorithms are built to maximize the impact of this type of social contagion, and by their human staff, who were either unwilling or not allowed to help. Both situations are equally bad.


Not to mention building software that decides who gets health care, insurance or a mortgage, and discriminates based on bugs and faulty premises. And we're not even at Tesla killing people with software bugs.


> The suicide rates of teen girls have tripled since the introduction of social media.

This is simply false.


All those engineers need to be hauled up because they’re killing people. Software engineers by contrast are high performance: barely any fatalities and so much value created. It’s why it’s not construction firms that are the most valuable companies but software companies. You can count on software engineers to build things that won’t kill, for the most part. Other kinds of engineers, on the other hand, are constantly killing people.

We need higher standards for engineers. They could learn from software folks. If you can’t do it safely don’t do it.


I have a friend who works in a hospital. Apparently the hospital software they use constantly freezes for like, 30+ seconds while they're trying to get work done.

Meanwhile, my friend has had her passport, entire medical history, and all sorts of personal information leaked by Medibank and then Optus, within a few months of each other. Neither company, as far as I can tell, has been held to account in any way for the blunder.

Meanwhile the Post Office Scandal is rocking the UK - where a software mistake landed a bunch of completely innocent people in jail and led to multiple suicides.

And don't even get me started on the impact of social media.

We might not kill as many people as engineers. Maybe. But we certainly cause more than our share of harm to society.


Software engineers destroy plenty through data breaches, lost time due to (often deliberately) terrible design, etc.

If organizations and software engineers were held to account for data breaches, we'd have a whole new landscape.

Lives are lost to software failures as well.


How many lives have been destroyed by leaked personal or payment data?


> If you want to call yourself "engineer" then at a minimum all those standards and minimum requirements should apply, no questions asked.

I don’t call myself that so I’m all good. My general vocation is just “technologist”.

Plus: not a protected title in my jurisdiction.


I think you can prevent a lot of revolt just by finding a way of putting this other than saying "software engineer output" should be held to standards. The whole point of this article is that the individual is not the problem. Rather, the system should prevent the individual from being the problem.

Nobody wants software quality to be regulated by people who don't know anything about software. So, probably we should be regulating ourselves. It's not an easy problem though. Can you really state a minimum safety standard for software that is quantifiable?

- All tests passing? Not good enough

- 100% test coverage? Not good enough (see the sketch below)

- AI approved code? ha
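
To make the coverage point concrete, here is a minimal, hypothetical Python sketch (the function and its bug are invented for illustration): a single test can execute every line, so a coverage tool reports 100%, and yet an obvious defect slips through.

    def apply_discount(price, percent):
        # Bug: nothing guards against percent > 100, so the price can go negative.
        return price * (1 - percent / 100)

    def test_apply_discount():
        # This one test executes every line of apply_discount, so line
        # coverage is 100% -- yet the negative-price bug is never exercised.
        assert apply_discount(100, 10) == 90.0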


I don't think it would be the developers that would revolt. Well, some would for sure, but the real opposition would come from the C-level, as this would give their employees way too much power to say no.


I'm an engineer by many definitions, although I don't have or wear a stripey hat, so not by the most important definition.

I'm not licensed though and have no signoff rights or responsibilities. If I were to consider signing off on my work or the work of my colleagues after review, the industry would have to be completely different.

I have been in the industry for more than 20 years and I can count on zero hands the number of times I've had a complete specification for a project I worked on. I can't sign off that the work is to spec if the spec is incomplete or nonexistent.

Writing, consuming, and verifying specs costs time and money and adds another layer of people into the process. The cost of failure and the cost to remediate failures discovered after release for most software is too low to justify the cost of rigor.

There are exceptions: software in systems involved with life safety, avionics, and the like obviously has a high cost of failure, and you can't take a crashed plane, turn it off and on again, and then transport the passengers.

You don't get a civil engineer to sign off on a one-story house in most cases either.


Any software system that might produce legal evidence should also be an exception… it feels a bit too much of a common case to call it an exception though.

(This is in reference to the Horizon scandal, yes.)


How do you define "minimum safety, correctness and quality standard"?


NCEES, which administers the regulation of Professional Engineers in the US, created a Software Engineering exam which had some standards: https://ncees.org/wp-content/uploads/2016/01/Software-Engine... However, so few people took the test that they discontinued it. But I at least liked the idea of it.


By having professionals agree on a minimal set, and writing it down?


So you propose no actual "minimal set". If liability is proposed, it has to be a reasonably binary state: compliant/non-compliant.

Just like every time, there is no concrete proposal for what constitutes the "minimal set". That's like saying "make education better" with no concrete plan. We can agree on the goal, but not on the method.


There are several ways we could write a list of best practices. But the simplest would be to simply attach a financial cost to leaking any personal data to the open internet. This is essentially how every other industry already works: If my building falls down, the company which made it is financially liable. If I get sick from food poisoning, I can sue the companies responsible for giving me that food. And so on.

We could also write a list of "best practices" - like they do in the construction and aviation industries. Things like:

- Never store personal data on insecure devices (eg developer laptops which have FDE disabled or weak passwords)

- Install (or at least evaluate) all security updates from your OS vendor and software dependencies

- Salt and hash all passwords in the database (see the sketch after this list)

And so on. If you locked some competent security engineers in a room for a few days, it would be pretty easy to come up with a reasonable list of practices. There would have to be some judgement in how they're applied, just like in the construction industry. But if companies were held liable when customer data is leaked as a result of best practices not being followed, well, I imagine the situation would improve quite quickly.
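
On the password point specifically, here is a minimal sketch of what "salt and hash" could look like, using only Python's standard library (the function names and the iteration count are illustrative choices of mine, not any mandated standard):

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per user means identical passwords
        # produce different digests, defeating precomputed tables.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(candidate, digest)

You store the salt and the digest, never the password itself; any best-practices list would presumably spell out details like this.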

It's boring work. Compliance is always boring. But it might be better than the current situation, and our endless list of data breaches.


Software is in a very unique position. It can be attacked all the time with complete impunity.

Buildings are completely vulnerable to even a simple hammer breaking a window, and locks have been basically broken for decades.

A food processor is easily vulnerable to a poisoned potato.

Only software has to take every attack humans can come up with and withstand them all.

What other industry has to deal with that? Maybe the military, an army camp in the middle of enemy territory?

We have improved massively, and we will and should continue to do so, but it's very different from other industries.

That's the core reason for breaches.

A security checklist will help filter out the most egregious idiots (especially upgrading away from vulnerable versions).


If a system is attacked or fails in a novel way, and the system design was done in accordance with best practices, the follow-up investigation will note this, the system implementers will in all likelihood not face any repercussions (because they weren't negligent), and the standards will be updated to account for the new vulnerability. Just like in other engineering disciplines.

Yes, things are constantly evolving in computer engineering, but how many of the failures that make the news are caused by novel attack vectors, versus not following best practices?


We don't live in a hard, binary world of extremes. Yes -- buildings and locks are, for the most part, easily compromised. But...

If I make a window of bulletproof glass, it's a lot harder to compromise.

If I surround my building with a wall topped by razor wire, it's a lot harder to compromise. (Unless, of course, I'm making an action film.)

Depending on what or who I'm protecting, I might find these solutions very valuable. Similarly in software, depending on what or who I am protecting, I might be very interested in various security measures.

So far, we've managed to escape an incident where bad actors have compromised systems in a way that's led to severe loss or harm. People are terrible at evaluating risk, so people tend to assume the status quo is okay. It's not, and it's only a matter of time before a bad actor does some real damage. Think 9/11 style, something that causes an immediate reaction and shocks the population. Then, suddenly, our politicians will be all in favor of requiring software developers to be licensed; liability for companies that allow damages; etc.


> If I get sick from food poisoning, I can sue the companies responsible for giving me that food.

Except that is not how food safety is achieved. There are strict rules about how food must be stored and prepared, how the facilities must be cleaned, and how the personnel must be trained. If these are not met, inspectors can and will shut a commercial operation down even if nobody got sick.

And if they follow all best practices, they have a good chance of arguing they were not at fault for your illness (besides which, following them makes illness much less likely to happen in the first place).


> Except that is not how food safety is achieved.

Food safety is achieved via a combination of strict rules (the equivalent of best practices) and liability in case of harm. There have been plenty of quite famous court cases over the years stemming from bad food being sold leading to people getting sick and dying.


> But if companies are held liable if customer data was leaked as a result of best practices not being followed, well, I imagine the situation would improve quite quickly.

One might argue that GDPR does exactly this, it holds companies financially liable for data leaks. Would you say it has improved the situation?


Well, I interpreted your question 'How do you define "minimum safety, correctness and quality standard"?' as being about the process, not about the outcome.

I actually have not invested much thought into what the outcome should/would be as I think it's unlikely to happen anyway. So why invest time? But maybe ask the author of the original comment?


The EU is doing something like this with some proposed legislation. Basically it's a CE mark for software. The current version is very much a bare minimum, though. But it does impose liability on people selling software.


> But of course the developers will revolt against that.

Why would developers revolt? OTOH users would definitely be unhappy when every little app for their iPhone suddenly costs as much as an airplane ticket.


> and quality standard

This is my life: https://en.wikipedia.org/wiki/IEC_62304


Oh I also unfortunately know this standard by heart, hi!


Our employers will revolt because we will either cost more or be slower.

The price of a developer is already very high compared to other skilled labor.


The agile-peddling consultants and managers would certainly be the first to the barricades.


As long as they're first to the stocks too, we're fine with it ;)


Would just cause more companies to outsource anyway.


It might do the opposite. Imagine if the company was held responsible for the quality of the software regardless of how it was made. I suspect it would be much easier to meet any quality standards if software was written in-house.


It would get outsourced to the first provider who can indemnify the company against any failures in the software. Whether or not any provider would dare to provide such a service however...


However it happens, it still attaches a legal & financial cost to lazy security practices. And makes it actually in companies' best interest to do security auditing. I think that would be a net win for computer security - and consumers everywhere.


In the EU this already happened and slowly but surely it is having an effect. Some US companies are still fighting it tooth and nail though.



