Wide-ranging SolarWinds probe sparks fear in Corporate America (reuters.com)
313 points by kordlessagain on Sept 10, 2021 | 148 comments



I've read all the comments, and as usual, no one's asked "what do other industries do?"

Money-handling, for example (banks, payment systems). If ever there was a Fraud Magnet, that's it. I've heard PayPal described as "a giant fraud-detection system, wrapped around a tiny money-transferring system."

And yet, they don't seem to be in the news all the time like "data theft" stories are. Could it be that the legal and regulatory and insurance systems have made it a manageable problem? Someone steals your credit card, your losses are capped. Someone steals your Personally Identifying Information, sorry, pal; change your passwords.

So maybe treating PII as the same thing, in every way, as money is the answer.


One useful move would be changing laws around identity theft so companies are liable for any costs incurred from their failure to verify identity, or for reporting credit issues from unvalidated activity. Americans worry about things like SSNs getting breached because they don’t want to get someone else’s bill — if companies were required to check photo ID against a real person (not an uploaded photo) that’d be a much harder crime to make financially viable.


Indeed, it is ridiculous that "identity theft" places a burden on the person whose identity was used - if someone opens an account in my name and the only "evidence" is having provided something that other people (e.g. my mother or spouse) can know, then in any dispute it should be illegal for that fraud/debt to appear on my credit report.

That's how most of the world has solved identity theft. However, it's not that easy to implement in the USA because there's no system of universal secure IDs there (by design). There's a multitude of ID forms, some of them not really secure (easy to forge, no way to verify that a credential was really issued by the claimed institution, no easy process to quickly check online whether it has been lost/stolen/revoked, etc.), and there's a sufficiently large minority of potential customers who don't have a valid ID at all.

It would be helpful to have laws that clearly assign the credit-fraud risk fully to the defrauded companies instead of the people whose identities were used; experience shows this would rapidly drive improvements in fraud elimination (there are all kinds of measures that simply aren't taken because they add friction). However, a proper solution does require a decent state-run identity system as the foundation of trust, and the USA has made a political decision not to have one.


>appear on my credit report

The root of the problem is sharing private information. Why are your credit reports shared among completely different entities? Nobody wants to give them consent to share your private information.

> system of universal secure IDs

Actually, it's the opposite. The US has a universal ID (not a secure one, though). That's the problem. If there exists one idiot who doesn't verify your identity, everything fails in a chain reaction, because everybody else believes the idiot.


> The US has a universal ID (not a secure one, though)

Do we? Our SSN is not a unique number, and not just because the keyspace is too small for our population. (It's worse: some of the prefixes are geographically related.)


SSNs have not been issued with a geographic prefix in a decade. I'm not sure if you're really suggesting two different individuals are issued the same SSN, but no, they never are. SSNs are never reused.

https://www.ssa.gov/employer/randomization.html https://www.ssa.gov/employer/randomizationfaqs.html


> SSNs have not been issued with a geographic prefix in a decade.

Even if what you say were true, handwaving this as unimportant because it only affects people over the age of ten seems a bit silly.


SSNs are entirely insecure as mentioned earlier in this thread. Geographic prefixes are a drop in the bucket on that front.

The post that I replied to is almost entirely incorrect as it relates to available SSN “keyspace” (a misnomer), uniqueness, etc., for which geographic prefixes and group numbers are relevant.

I supplied direct sources, so no idea what your “even if what you say were true” skepticism is rooted in.


With "ID" I mean "a photographic ID document used for verifying one's identity" - SSN (or, really, any "something you know" factor) is not an insecure ID, it doesn't even attempt to be one.


>Nobody wants to give them consent to share your private information.

Freeze your credit. The burden shouldn't fall on the consumer, but it's easy to do and easy to lift when needed.


The problem is knowing when to take action. It could be months after the actual event before you find out about it. And you’re liable for everything in between.

Identity theft is actually just fraud. And the companies that allowed the fraud should be required to shoulder the burden of addressing the matter with the actual people who committed it. No part of that burden should ever be placed on you just because someone pretended to be you and committed fraud.


I wish proving your identity used asymmetric cryptography, so that if one company's database is compromised, your identity key isn't compromised - just a public key and/or some value signed using your private key.
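
For illustration, here's a minimal sketch of what such a challenge-response flow could look like (Python with the "cryptography" package and Ed25519, chosen purely as an example; the enrollment/verification flow and names here are my assumptions, not any real scheme):

    # pip install cryptography
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: you generate a keypair; the company stores ONLY the public key.
    private_key = Ed25519PrivateKey.generate()  # stays on your device/smart card
    public_key = private_key.public_key()       # this is all the company keeps

    # Verification: the company sends a fresh random challenge...
    challenge = os.urandom(32)

    # ...you sign it with your private key...
    signature = private_key.sign(challenge)

    # ...and the company checks the signature against the stored public key.
    # A breach of its database leaks public keys only, which cannot be used
    # to impersonate you.
    try:
        public_key.verify(signature, challenge)
        print("identity proven")
    except InvalidSignature:
        print("verification failed")

Nothing the company stores is a usable credential on its own - which is exactly the property an SSN lacks.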


Oh, trust me, I know that this is a self-inflicted problem — we have too many people who subscribe to conspiracy theories about things like the “mark of the beast”. It's just somewhat impressive to see how effectively companies created a new category of crime to direct attention away from their negligence.


See, I do understand the distrust of giving the state the ability to cut people off from society, by revoking an ID for example. Especially if there are laws making ID checks mandatory (which I am generally against).

But I think this is mitigated as long as it’s optional for a company. The company is held liable for any fraud that they allow. The company has the option to use the government ID to prevent fraud, but they can also assume more risk and take on a customer without the “official” gov ID, if they want to.

I can see this resulting in something like creditors saying: “either you can use a govID to sign up for this credit card, like normal. OR you can send us a $10k deposit and forego the govID entirely, if you like.”

This solution holds companies more responsible, but decreases the risk of expanded government power by leaving it to the company's “risk management team” to decide.


> See, I do understand the distrust of giving the state the ability to cut people off from society, by revoking an ID for example. Especially if there are laws making ID checks mandatory (which I am generally against).

How does that not already happen, just inefficiently? It's hard to function in the U.S. if you don't have a Social Security Number — that's why people bother using someone else's — and we already have a de facto ID system for most people but it's a patchwork at the state level which was somewhat federalized with RealID.

It's hard to imagine a scenario where people would unjustly be “cut off” and where the state-level system would prevent abuse that would otherwise happen — it's not like, for example, California stopped politically-motivated DHS activity during the Trump era.


I don't remember where, but Ross Anderson said something like "It's not 'identity theft', it's personation.".


Identity theft or bank robbery? https://youtu.be/CS9ptA3Ya9E


Yes, a fine explanation from Mitchell and Webb, worth keeping in mind.


> theories about things like the “mark of the beast”

My hope is their anti-vax research eventually leads them to learning about DNA.


The problem isn't with a person having a UUID of some sort (of which their genome is one). The issue is that the Book of Revelations talks about a Mark people will need to have stamped on their arm and/or forehead in order to be able to conduct business. I.e. it's a problem of allegiance, not authentication.

So, in practice, anything that pattern-matches to "people will need to carry some sort of token given by a big organization (private or public) to pay or be paid for goods and services" will be viewed by some as the Mark, or a slippery slope towards the Mark.


Last I checked, the anonymous web was effectively dead. Or do these people not use the internet either?


They do, but most people aren't aware of how the Internet works. Hell, most people haven't learned the concepts necessary to comprehend what information is (personal or otherwise) and how it behaves - not in terms of technology, but as a fundamental component of reality.

Anyway, the Mark as described in Revelations is pretty... bodily, for lack of a better term. It evokes the image of getting a barcode stamped on your arm or your forehead in exchange for swearing fealty. The Mark feels like a concrete, physical thing. That's why things like "government ID" or "payment chip in your arm" pattern-match to this prophecy for so many people, while things like "mobile phone number" or "e-mail address" don't.

(There's also a factor of scale/graduality. For people alive this century, countries and governments were always a thing. A big thing. Banks too. The governments, the UN, the international financial system - they look big, evil, and pattern-match to the Beast. In contrast, for most people alive today, mobile phones and e-mail addresses were introduced gradually, by a great many independent vendors. They don't have this obvious Beast-like quality.)

Source: grew up as a Jehovah's Witness. While I obviously can't speak for all fundamentalist Christians, and while JW teachings don't consider government IDs to be the Mark[0], I got pretty familiar with the patterns of thinking people show around this topic.

--

[0] - They do, however, believe that the Beast described in the Revelations is currently embodied by the United Nations. So if the UN ever proposes a common ID scheme or an electronic payment system, I'm pretty sure plenty of Witnesses will throw a fit.


If it were true research, sure. But more likely it's looking for anything that seems to support the theory and ignoring everything else.


To be fair, all the anti-vaccination people I know are solidly liberal, non-religious types who believe in "natural" medicine, etc.


> if companies were required to check photo ID against a real person

It'll be really hard to convince people to give up the convenience and higher returns of online-only banks.

A better option would be using cryptographic digital signatures by an HSM (smart card) to verify ID for financial services.


We have online banking here in Norway, have had for decades in fact. We don't have any problems with verifying ID.


Video ID verification works quite well in many cases.


It's not theft. It's fraud.

Framing the act of defrauding a person as a downstream effect of being a victim of theft is nothing short of institutionalised victim blaming. No, just no. The person was defrauded because of stolen identity and/or payment documents. They were not a "victim of theft": they were a target of fraud.

Calling it theft is a sleight of hand to absolve banks, payment providers and businesses from their responsibilities.


This is fundamentally why "identity theft" exists at all. It's sleight of hand for moneyed interests to shift the responsibility for being robbed onto their customers. It's no wonder "identity theft" is such an issue when there's so little incentive to use a means of verifying identity that isn't broken.

Using a secret number to verify identity is absurd and hilarious.


> So maybe treating PII as the same thing, in every way, as money is the answer.

How? Money is fungible, PII is very much not. It's not like they can give you a new identity if your identity is stolen.


I have doubts about PayPal's anti-fraud capabilities. They allowed someone to open another account with the same name and address as mine, but with a different phone and email, without any confirmation. And that person bought something, and after it wasn't paid for, my information was given to collectors…


What would be the cap for losses from losing PII?

Credit cards generally have one use: payments. Usage is not difficult to quantify. The card is generally worth the same to whoever is in possession of it.

PII has a multitude of uses. The prices offered on the black market for PII do not reflect its value to those that it identifies or those from whom it was stolen.


If we treat PII as money, it implies the loss can be clearly quantified and compensated for.

The banking industry isn't better because they have solved the problem; they are just hiding behind the fact that the victim can be compensated, and hence insurance can cover all the risks.


I've heard Paypal described as a "giant fraud system"


But GDPR and cookie banners forced me to stop selling my startup services in Europe :(. /s


Good. Security will only be a priority when failing at it is more expensive than the profit from skimping on it.


That's part of the definition of good security engineering. Protect stuff up to its value, and never spend more money than what is needed to rebuild it from scratch.


Yep. This. Couldn't agree more. I went to a BSides talk years ago titled "Does DoD Level Security Apply to the Real World?" ~ In summary, Yes.

The premise of the talk, as I understood it, was that too many small operations or "mom and pop" shops think that they don't need "Department of Defense" level security because they're a small general store, not Fort Knox. That's a misconception. "DoD Level Security" doesn't mean that you protect your place like the NOC list in Mission Impossible; it means that you are proactive in thinking about your threat model and assessing the value of your assets. If, after proactively thinking it through, you're still comfortable with just a cheap padlock and no alarm system, then you've applied "DoD Level Security" (or something like it).



Capping spending on security at the cost of rebuilding from scratch implies that total loss is the worst thing that can happen from a security breach. That isn't true. A security breach could be more costly than a total loss.


I think that for-profit organizations actually mesh quite well with the "security economics" perspective, i.e., they care about security to the extent that they see it affecting their own utility function. In ideal circumstances, negative externalities like the impact on the breached users flow back into the company's incentives via bad PR.

The problem is that there's a shortcut: it's inherently easy to hide security breaches, given that the security domain already involves a baseline level of opacity (as opposed to, say, product or pricing decisions). With that shortcut available, the "value" of security from the perspective of the organization drops precipitously. To make matters worse, hiding security breaches causes collateral damage by making mitigation by the victims harder (if no one tells me my SSN was leaked, I won't, e.g., freeze my credit report).

The answer, as it often is, is for regulatory pressure and robust enforcement to reconnect this feedback loop and tie the externality's consequences back to the agent. The easiest step is requiring disclosure of breaches. As such, the news in this article seems like it should be unequivocally celebrated.


The problem with that statement is that the value function might be quite different for the company vs. the impacted user.


> Protect stuff up to its value, and never spend more money than what is needed to rebuild it from scratch.

Oh, but the problem appears when you're holding other people's information. "Your SSN ain't worth much to me, sorry; keeping that pipeline open only matters X much to our bottom line," etc.

"Good Security


Thanks, yes. Like I said in my other comment: if you keep other people's money there are laws and rules that apply to you. You may not be negligent with it. The phrase "fiduciary duty" comes to mind.

Yet somehow, keeping their PII imposes almost no obligations on you at all.


I will only add:

About god damn fucking time.


Indeed, liability for insecure and careless software proven to allow cyberattacks is what is missing; only then will management start to care about which programming languages and development processes they adopt.


Or when security actually becomes profitable in itself.


Security is profitable. Very profitable. That's likely one of the reasons a lot of companies avoid it: it's very expensive, and most see it not as adding to the bottom line but as taking from it, because it's largely invisible until something major happens.


That's why dapps on Ethereum and the like are and will be way better than any alternative.


And businesses cannot operate at a loss, so increased expenses will be passed on to customers. Yay... right?

If we make security-lapse expenses higher and higher, we can all pay more and more until all products are completely secure but no products remain....


Yeah, who needs those aircraft safety regulations? Where is my Boeing Max-Max with a 50% chance of taking a swim mid-flight?


The vast majority of software security issues don't kill people. Trying to price them higher than current levels will add cost to goods, no?


Goods cost you in ways other than their price - unsafe electrical goods can burn down your house, leaked personal data can get you robbed or defrauded, food can cause poisoning, etc.

So you might save $5 on the price of the "smart doorbell" and then lose $50,000. Obviously there is some kind of balance that needs to be struck, but the amount of data leaks and fraud is plainly out of control at the moment.


The same applies to food regulation, restaurants, supermarkets, goods that don't last the warranty period, ...

If a business cannot manage, it closes, as simple as that.


People are worried about the wrong stuff. SolarWinds was bad, but it was likely an intel operation. They wanted access to networks for intelligence purposes. They jacked it so they could access assets behind corp firewalls. Spies will always try to spy.

IMHO the Kaseya hack was far worse - maybe worse than WannaCry, but with better outcomes. This was a criminal operation, enabled by criminal software suppliers, that was really only resolved when the keys were leaked on a forum.

The rumor is that local intelligence forced the disclosure of the keys (e.g. guns to heads), because this is pretty much the destroy-the-world scenario that is unstoppable. It is far too easy for attackers to cause billions of dollars of damage in a day.

It's not getting better. It can't. Our systems are designed around large scopes of trust with massive attack surfaces. Security is a game where the defenders cannot mess up even once. It's hopelessly asymmetric and can never be better.


We can start by repealing laws that give corporate entities immunity when data is leaked. Make them liable for lawsuits with set minimum damage amounts for exposed data, and one would be able to watch the money flow into better tech security on a society-wide scale.


We can also add a prohibition on three-letter agencies installing purposeful backdoors, which are later exploited by criminals. Maybe it's time they actually helped regular citizens protect themselves and their privacy, instead of playing chicken with their counterparts abroad.


Simpler and more effective solution: Do as JFK suggested and "splinter the CIA into a thousand pieces and scatter it into the winds".


...and we saw what happened to him. JFK's and RFK's deaths together were a foreign and domestic policy coup. Ask yourself: by whom?

Coincidence theorists are the real crazies.


I support that, but … how often has that happened? That Juniper incident didn't seem to be widespread, and it certainly doesn't appear that a notable percentage of breaches are due to that kind of thing.


Except we don't know about most of the hacks going on, so we definitely don't know how they happened. E.g. we'll never know how many hacks were due to Debian's SSH fiasco, but I bet it's far from zero.


We don’t know everything but think about how many we do get details about showing nothing of the sort. It seems conspiratorial to assume that this happens often but is always hushed up.


What laws do you imagine grant corporate entities immunity? The companies are victims of the crime in this case, along with their customers. There is no special law that grants corporations immunity for falling victim to a crime, because being the victim of a crime isn't illegal.

If you look at how this sort of thing is regulated, there are two general approaches. The first is creating a category of data that requires special protections and defining a standard for protecting it, either through legislation (like HIPAA) or self-regulation (like PCI). The other is to specify a requirement to protect all PII but not define any specific standard for protecting it, only prescribing penalties for failing to do so (which seems to be the EU's regulatory approach).

Both of these approaches are problematic.

Is it self-evident that any breached data was not sufficiently protected? I don’t think any experienced professional would agree. It is impossible to build a system that is completely protected from being potentially compromised, and it’s possible for a largely unprotected system to last its entire lifespan without being compromised. So the simple fact that a system has been compromised doesn’t necessarily reveal any information about how adequately protected it was.

On the other hand, is there a single security standard that’s widely regarded as being good? I don’t think there is. The ones that are generally regarded as the best I would personally consider to be not bad, but not great. One size fits all solutions tend to find a lot of not fit for purpose use cases as well.

It’s also not apparent to me at all that spending more money on security achieves better security outcomes. I’ve worked in numerous large enterprises that spend enormous sums of money on security budgets, and manage to achieve very little with it. So I don’t think you’re going to get much consensus on that being a suitable metric for how adequate a company’s security systems are either.

You could easily devise a system that punishes companies for falling victim to these attacks. But that's the only outcome it's going to achieve: a punishment for being the victim of a crime.


Thanks for the Kaseya reminder - it had vanished from my memory. For a period, these attacks seemed to be coming thick and fast. According to Wikipedia [1]:

9 July 2021 - phone call between Joe Biden and Vladimir Putin. ... Biden later added that the United States would take the group's servers down if Putin did not

13 July 2021 - REvil websites and other infrastructure vanished from the internet

23 July, Kaseya announced it had received a universal decryptor tool

I'd love to read the real story behind that. Perhaps "guns to heads" did happen.

[1] https://en.wikipedia.org/wiki/Kaseya_VSA_ransomware_attack


> SolarWinds was bad, but it was likely an intel operation.

s/likely/definitely/

Yes, tens of thousands of companies were hit by the trojaned update. IIRC, only 30 or so companies were then exposed to the second- and third-stage malware. The attackers were very careful: for the grand majority of affected companies, the functionality was disabled shortly after infection.

I have heard two plausible theories why this might have been the case. One is that the attackers wanted to avoid detection as long as possible, and the ongoing additional egress traffic would have been easy to detect. Another is that they wanted to protect their infrastructure from being flooded and their collection systems from getting overwhelmed. Personally, I think it's a bit of both. If you're after intel and want to find needles in a haystack, the last thing you want is truckloads of more hay.


At some point, "corporate america" decided that willful ignorance was better than making an effort, possibly failing, and possibly being held liable for that failure.

It's annoying that there are laws for people, and then there are laws that apply to some corporations, but not always and not to all of them.

"Maintaining an attractive nuisance" is what they tell people with unfenced junkyards, right? Why couldn't that apply to some of these folks aggregating data about our kinks "unwittingly" displaying the results to the world.


That’s how most companies do most everything. If they get big then they’ve figured out a systematic way to win at one or more games in business. Everything else is just enough of a shit show to get by.

As with all aspects of modern business operations, “how to do it right” has been crowed about for decades by experts who care. It's just that nothing matters until it matters, such as waste disposal, workers' rights, product safety, etc…

If you show me the incentives I’ll show you the behavior. The only way we will ever get data security to matter more than theater and “check the box” is for the obvious to happen (bad consequences).

We don’t have a Ralph Nader.

This is why I'm against responsible disclosure, against accepting below-market payouts on bug bounties, and against generally treating companies with any modicum of trust. Not until it hurts so bad that people are on the steps of the capitol building baying for the blood of CIOs will we see meaningful change.


Hmmm, Ralph Nader? The guy that went after GM for safety and helped establish a new agency that studied the car he used as fodder for his theatrical campaign, only to find it had no safety issues? Ralph Nader is already a laughing stock and a warning to all in the history books.


oh, wow. I already thought the job of CIO was overwhelmingly stressful! I think the job description is: try to create some guardrails but worry constantly about events way outside your control ruining everything.


No, it's primarily vendor management (according to the CIOs I've interviewed).

When you have a network security department unable to articulate its policies, which relies on vendors for everything including expertise, you damn well should worry.


There are really just laws for some people too, and not always, and not all of them.


The Law embodies the will of the ruling class. This is what we were taught back in school.


More like they wanted to avoid mob panic. If the corporation was hacked and kept it on the DL, but boosted security posture in response, is that a bad thing? If they were hacked and did nothing, well, screw them. Perhaps the SEC should couch the expectations with a bit of reassurance.


> If the corporation was hacked and kept it on the DL, but boosted security posture in response, is that a bad thing?

If it's a public company, it's securities fraud. IMHO, securities law is the most effective tool at the moment in encouraging improved security engineering, best practices, and posture.

https://www.sec.gov/news/press-release/2021-154

""As the order finds, Pearson opted not to disclose this breach to investors until it was contacted by the media, and even then Pearson understated the nature and scope of the incident, and overstated the company's data protections," said Kristina Littman, Chief of the SEC Enforcement Division's Cyber Unit. "As public companies face the growing threat of cyber intrusions, they must provide accurate information to investors about material cyber incidents."

The SEC's order found that Pearson violated Sections 17(a)(2) and 17(a)(3) of the Securities Act of 1933 and Section 13(a) of the Exchange Act of 1934 and Rules 12b-20, 13a-15(a), and 13a-16 thereunder. Without admitting or denying the SEC's findings, Pearson agreed to cease and desist from committing violations of these provisions and to pay a $1 million civil penalty."


"If it's a public company, it's securities fraud. IMHO, securities law is the most effective tool at the moment in encouraging improved security engineering, best practices, and posture."

That's just a fucking sad state of affairs. Apparently they owe nothing to their customers.


They are contemptuous of customers, viewing them simply as dumb cattle to be manipulated.


If what's being disclosed is of the nature of the Pearson hack (theft of student records) then great. But there are probably thousands of hacks that don't result in the disclosure of PII or other confidential information.

I can understand companies being worried that a compromise of a test system with no access to sensitive data -- which they normally wouldn't be required to disclose -- could make them look bad. But at the same time they're all being required to disclose this info so at least there's safety in numbers.


I agree that a breach of a test system with no access to sensitive information or digital property (information, source code, binaries, etc.), and no ability to pivot from said test system to other systems, should not require reporting. To pick an example, that's not what's happening with S3 buckets and Mongo instances (where vast amounts of personal and/or sensitive information is being leaked). That's not what happened with Equifax, T-Mobile, SolarWinds, Colonial Pipeline, Pearson, CNA Insurance, etc. You have to hold these businesses' feet to the fire, and if they don't perform, dissolve them after repeated regulatory failures (just as happened to Arthur Andersen after Enron's failure, or as the FDIC would part out a bank after insolvency).

https://www.reuters.com/technology/hackers-demand-70-million... (July 2021: Up to 1,500 businesses affected by ransomware attack, U.S. firm's CEO says)

(disclosure: infosec practitioner)


I think how and why the breach occurred matters more than what information was accessed. In asset management, for example, when you're dealing with an error you don't just look at the dollar amount. Maybe the error only cost a couple thousand dollars today (or maybe it even made money!), but the exact same error on another trading day could just as easily have been ten, or a hundred, or even a thousand times more costly. That the error happened at all is the material event. And that's why there's no such thing as a de minimis trading error. Sometimes you just get lucky in the magnitude of the impact. Even if it didn't cost you anything, you still need to address the weak point that allowed the error to happen in the first place.

So even if a system with absolutely no information was breached, if your other system(s) use the same or similar security, then it doesn't really matter that nothing was taken. The breach could still be material (and require disclosure) because it exposed a material security vulnerability.


Lots of nuance that can’t fit into a single thread.


> If the corporation was hacked and kept it on the DL, but boosted security posture in response, is that a bad thing?

A public corporation has a legal and fiduciary duty to its owners, and hiding what happened is not part of it.


Zero accountability, complete lack of responsibility, and a total absence of sufficient security in far too many instances. Something like this has been long long overdue.


Yes. But -

I'm skeptical that a probe will have any more teeth than the censorship-testimony fist-shaking we've seen aimed at Zuckerberg and Dorsey.

Even if there is, the solution needs to be punitive: if you ship shitty software and didn't follow good practices, you'll be investigated and fined. New frameworks for what constitutes software negligence.

The last thing I would want to see is software regulation, oversight of development, Government access. For about a dozen reasons each.


Interesting. I've always wondered if we could hold software developers to the same standards as we do medical practitioners.

But of course, with the extreme shortages, companies will basically hire anyone fresh from college, and the level of responsibility from people in the industry is low.


I like the idea. But isn't it a common complaint that at a company there are very few people who really have a holistic view? That it would be as hard to bring a regulator in to inspect as it would be to onboard a new hire?

Compare that to engineering, where you often see the same things from job to job.


While PCI is its own bag of worms, part of the certification process is to describe the architecture to an outside auditor. It's annoying, and companies can (and will) complain all they want, but without meeting that requirement, the company can't say they're PCI compliant. Which they want to be. So they meet that requirement.


This is why real engineers complain about developers calling themselves engineers.


Software engineers are engineers in the sense that we employ the same scientific methodology to design systems to a target level of reliability.

However, because the majority of software does not affect the safety of human lives (the exceptions being software that operates critical medical equipment, avionics on aircraft and rockets, etc.), the target level of reliability is not nearly as high as in the engineering of objects that do affect the safety of human lives (like buildings and bridges). Humanity also has centuries (indeed, millennia) more experience with the construction of physical objects, although rigorous scientific design of them only began in the last few; and even modern era engineering fails to account for all factors (e.g. the Tacoma Narrows bridge that collapsed due to oscillation induced by wind: https://en.wikipedia.org/wiki/Tacoma_Narrows_Bridge_(1940) ). Modern engineers operating today still make mistakes that cause death, such as the collapse of the construction crane in Seattle in 2019: https://en.wikipedia.org/wiki/Seattle_crane_collapse (due to wind and unsafe operation IIRC).

If you're building a website or app where people can order food from restaurants, creating a market for couriers and consumers to connect, then no aspect of your software has any effect on human life. (The software might tell the courier where to drive, but they are responsible for driving safely to those locations.)

When we consider software that is responsible for the operation of autonomous vehicles, or the avionics on aircraft or rockets, then the level of engineering reliability is targeted to match or exceed the reliability of the physical systems. A great article on the software engineers who worked for NASA on the Space Shuttle's software: https://www.fastcompany.com/28121/they-write-right-stuff

A software engineer is a person who can build software to a target level of reliability. That the targets are not always as high as "responsible for safety of human life" does not mean it's not engineering. I would give AWS's automated theorem-proving about the correctness of their TLS implementation as an example of a feat of software engineering that targets a high level of reliability: https://aws.amazon.com/blogs/security/automated-reasoning-an...


The implementation of anything in the last paragraph would be a shit snowball of the finest degree.


Yeah, god forbid if the public has access to source code of critical systems that the country relies upon to run critical infrastructure like oil pipelines.

We might lose any respect for the people in charge whatsoever.


Just like all the transparency from all the other gov agencies, right?

And I think the discussion is about private businesses, not some forced open-source ideal you seem to have conjured up.


Is it your private-business free-market ideal when half the country comes to a standstill because the oil pipeline runs on Windows XP or some shit?


Are you implying that Windows XP needs security reviews now? Or that regulated energy markets make Windows XP? You lost me with any sort of relevancy there. As I understand it, the topic is about software manufacturers. Did you just want to throw out some free-market attack (while not using a free-market example)?


The age-old advice is "don't talk to the police"... I imagine that goes wayyyyy more for talking to the SEC. Of course they're fearful.


I have noticed "don't talk to the police" being repeated often around here lately, with links to the YouTube video. While it is probably good advice for your general day-to-day encounters with police, I don't think it is great advice for executives of a corporation dealing with the SEC.


I think the whole sentence should be "don't talk to the police without a lawyer / let the lawyer speak for you". And in the case of an executive talking to the SEC, you should absolutely have a lawyer, or multiple, with you.


I was audited by the “tax police” at the IRS. We had a constructive conversation, I fixed the problem, paid some more taxes and was done. I don’t think this advice applies to all government investigators.


Yes, multiple people that I've known have been audited by the IRS. All were small business owners who did their own taxes (vs. having them done and submitted by a CPA). At the end of all the audits, the IRS ended up writing a check to the businesses. (If you can find more overlooked deductions that exceed the overlooked taxable items, they must make the adjustments in your favor.)

One good friend got audited five years in a row; maybe the local bureau chief was just sure he was up to something. The last time they were writing a check to him, it was going to be for less than $2, and the agent asked if they really wanted it — "Of course I damn well want you to write that check!".

I've had a career mostly in small businesses, and always had a CPA do it, with never an audit. I strongly suspect that it not only gets me proper deductions that I'd otherwise miss, but also goes a long way toward avoiding an audit, since the CPA is putting their license on the line by signing it. I'd recommend the practice; just find a good one who charges a flat rate (they do exist, it just takes some looking).


You can always be lucky. I was audited a few years ago; they didn't find any issues. So, just for fun, they added a special audit 3 months later. Needless to say, they didn't find anything that time either.

Don't talk to the police or the IRS. They are never aligned with your interests, and whether you get a reasonable person or someone who just loves to ruin your day is random.


Let me be precise: I was audited because I forgot a 1099 form. It was a stupid mistake on my side. If you're being audited just so they can root around and find problems, then yeah, be careful.


I've never been audited, but isn't it more a case of "you must talk to the IRS or they'll simply collect what they think you owe and leave you no recourse"?


Sure, you have to provide the stuff they request, just like you have to comply with police officers in the moment, but you don't need to (and should not) "help" them or give them any information beyond what you're required to by law, which is often your name and not much more.

If our local IRS equivalent contacted me "to clear something up", I'd refer them to my tax consultant in all cases. Assuming good faith with state employees is often a costly mistake.


You'd be surprised. At my employer, I'm told before every meeting with internal and external auditors to not offer unrelated information to them.

Basically, give them only what they ask for and exactly what they ask for.

If no one sees something, it doesn't exist, right?


Actually, not talking to or interacting with the police sounds like the most rational advice for Americans right now.


It’s time to fix software security. And it’s gonna be hard.

First, there is no unbreakable software. Second, software is written by average people, versus the above-average people who are hacking it. Mission impossible.


There are really two problems that could go under the name of "fixing software security":

(1) How do you improve the state of the art, so that, if a company is serious about security, they can succeed?

(2) How do you fix the way companies are run so that they actually even try to take security seriously?

Both are big contributors to the overall problem.

I do think there is room for improvement in #1, so it's something we should be looking at. But we could get a lot of mileage out of #2 even if there were no way to move the needle on #1.


Is there evidence that the average hacker is smarter than the average developer? I would expect the opposite to be true, because legitimate work seems more profitable/stable, but also I'd imagine the difference isn't that high either way.


I think the incentives are lopsided. The developer does not personally bear the blow of their company's data breach (unless they're dedicated cybersec personnel) whereas the hacker reaps all the reward of getting access.


The parent never used the word "smarter". By definition, the average developer is developing applications, but the non-average developer is doing something else, possibly hacking. Hacking is not the average activity (the way that word is used today).

With regards to skill sets, I have repeatedly found that people who engage in hacking range from skill sets of "knowing how to use a hacking kit" to "uber developer with security knowledge". There is a wide range of skills and knowledge.

However, it is practically an entry requirement for someone in the security space to view software differently than most programmers. That is defined as non-average.


They didn't say smarter, but they did say above-the-average, which implies better (as opposed to worse) rather than a different skill set. That is to say, I know exceptional "hackers" who can't code their way out of a paper bag or build any sort of GUI. Similarly, I know some really good programmers who don't understand how computers work a tenth as well as hackers do. There are genuinely smart people in both camps, but they have different skill sets.


Of course not; this is just the parent poster's opinion. The truth of the matter is that there are exceptional individuals who decide to get into software development and software security. The problem with software is that companies often don't invest in securing their software, and that has to be a priority. Perhaps having the SEC force fines for not securing mission-critical software is the first step?


I remember that after finishing our CS studies, we were taken by the Army to sit a day-long test. We were warned that we had better fail the test unless we were willing to be enrolled. However, this might be an isolated case.

In turn, I guess a security professional is scarcer than an average developer. The question is whether all security professionals are hired to strengthen systems, or whether some of them are hired to break them.


Yes, large engagements frequently include a "red team" whose job it is to try and break into the system.


I used to think this way; it can be really dangerous to assume a level of intelligence from background information.

More to the point, hackers can be far more motivated to break things than the average developer is to secure them.


You hit it. Cracking a system that has mountains of attack vectors is interesting, fun, and potentially profitable. There is so much to probe and try. Flip it around and look at the priorities of software developers in a corporate setting. Most of them are so sick of dealing with security hoops and theatre that it's the last thing they want to think about.


Sorry, this is not true.

The real issue is that software has bugs from everyone who contributes to it, and all it takes is finding one.

What I mean is that it takes just one sloppy developer to introduce a bug, and that's all you need.

Making unbreakable software is a much harder task than breaking it.

It's not about who's smarter, it's about what's easier.


Sloppy or extremely busy and/or over-worked. Throw a few desk side jobs at me while I'm maxed out with story points and I can practically feel the bugs flowing into the code. I'll get it all done but...someone is going to pay later.


Yes, but this doesn't refute what I said; it only exacerbates the situation.

I wrote code with no pressure, tested it, used it for months, and with some input that I didn't think about it broke spectacularly... I might be a below-average programmer (although I would say no, from what I have seen so far), but as I said, the chain is only as strong as its weakest link.


I still believe it's about who does what. Code written by an average developer is breakable by a better-skilled developer. The reverse is not true.


That’s not how any of this works. At all. Not even slightly.

Like, what? “Thank god, I only hire developers with a skill level of 78 so it will take a developer of skill level 79 or above to even have a chance at finding a bug here, and we all know skill level 79 developers are rare so I’m secure!”

This isn't an RPG; life is full of unquantifiable things and conflicting incentives. If anything, the domains in which exceptionally skilled developers often work and the tools they use make bugs more common.

It doesn't take a Linus Torvalds to find bugs in Linus Torvalds's code.


I strongly disagree... superstars also make mistakes, and cannot know every single quirk. In particular when you have languages (like C++) with tons of "features" that enable very weird side effects in very weird situations.

By the way, the worst clusterfuck I ever saw was caused by a very talented programmer who implemented a very complex object store on disk... the thing was brittle at best, and it failed in very spectacular ways.

Very entertaining to debug.


Maybe the cost equation becomes more evident to companies:

dedicated above-average internal* cybersec staff < (SEC fines + outcry when breach goes public)

* external seems like a different can of worms. perhaps someone in cybersec can refute/expand


There is no one to hire. Brainless monkeys are getting paid deep into six figures to watch dashboards, do security scans, and check off boxes.


More than avg vs. above avg, I would say it's building a house of cards vs. making a house of cards fall. The latter is way easier.


And as a house-of-cards toppler, you only have to find the most unstable one in a group. Perhaps to many companies it seems like a revenue sink to implement proper security. It Probably Won't Happen To Us™ and so forth. Okay. Maybe the return on spend would seem more concrete if a company framed the goal as being at least as fit as the average of the herd.


As of April 2021, SolarWinds still shows up in Gartner reports read by managers and 'thought leaders.' Until they start losing prestige in the trade rags, you can expect them to endure as a corporate standard, best practice, industry standard, and "enterprise grade" solution, regardless of what common sense and competent system administrators at your company say.


I'm wondering if anyone here on HN actually knows if any of this has teeth and if corpos are legitimately worried or if it's more of a feint. I'm generally skeptical of SEC enforcement.


- Offer amnesty / limited liability / zero liability for losses following breaches

- Require full disclosure to national data registrars following breaches from now on.

- Make source of income as big a deal as KYC

- Make KYC a "walk into the branch with some photo ID". How many of us really need to borrow thousands without going into a store or bank?


I'm very confused about your last bullet point. I don't use physical banks and do all my finances online (like many people). I regularly move thousands of Euros and don't see why I would need a physical bank. That would severely impact a lot of people.


KYC - Know Your Customer. When someone opens a new account, prove you know who that person really is. Basically, photo ID in a branch would end "identity theft", and similar beneficial-owner rules would more or less end money laundering.

Fundamentally, we have as much crime as we are willing to let the banks allow.


For sensitive software like this, we should make it open source so that more eyeballs are looking at it.


Can't help but ask, but as a security pro, what would the consequences be if we just let it burn?


If a public company fails at infosec, and financials or other material nonpublic information is stolen and used to trade, then yeah, it's securities fraud.


This is what happens when you delete Hoover Beaver.


Why the SEC? I would expect this to be more the domain of the FBI or NSA or any of the more “cyber”-related three-letter manifestations of the executive branch.

Either way, I’m glad government agents are going door to door to check on privately owned servers. Maybe they should check everyone’s vaccination status while they’re at it.


"The U.S. Securities and Exchange Commission is a large independent agency of the United States federal government, created in the aftermath of the Wall Street Crash of 1929. The primary purpose of the SEC is to enforce the law against market manipulation."

"In addition to the Securities Exchange Act of 1934, which created it, the SEC enforces the Securities Act of 1933, the Trust Indenture Act of 1939, the Investment Company Act of 1940, the Investment Advisers Act of 1940, the Sarbanes–Oxley Act of 2002, and other statutes. The SEC was created by Section 4 of the Securities Exchange Act of 1934 (now codified as 15 U.S.C. § 78d and commonly referred to as the Exchange Act or the 1934 Act)." [1]

[1] https://en.wikipedia.org/wiki/U.S._Securities_and_Exchange_C...


Because apparently the SEC years ago issued an order that breaches must be disclosed if they may have a material impact on shareholders (e.g. a potential large lawsuit from customers some years afterwards, when the full extent becomes known), and it's the job of the SEC to ensure that company executives don't hide company problems from the shareholders.

In essence, if you want to keep your dirty laundry private, then you're not allowed to take money from the public stock market, as investors (i.e. everyone, if you want to be publicly traded) deserve to know about any major issues with your private servers. The SEC doesn't care how poor your security is as long as the company is open about it, but it absolutely cares if the company lies to its owners about its (lack of) exposure.


Because the SEC regulates publicly traded companies.


"Everything is Security Fraud" - Matt Levine (Money Stuff guy)


Good. Some of these companies are run by socially connected technical morons who hire a bunch of their college buddies as 'leaders'.

These people need to be exposed.

Years ago, a guy I know was asked by management to spec out an email system that had no limits on the size of file attachments. He asked why and was told that 'leadership will have no limits on their authority... none whatsoever'.

When he produced the quote, leadership was in shock. The price was enormous. They told him they could not afford to spend that much money on a mail system, and he said, "Well, I guess there will have to be limits then."


> 'leadership will have no limits on their authority... none whatsoever'

For some reason I envy whoever got to hear that sentence in real life. It makes it perfectly clear that you're dealing with assclowns.


I have worked for a boss like that, but it really wasn't bad. He knew what he wanted and left us alone to do our jobs. No micromanagement from middle managers; it was very nice.


> leadership will have no limits on their authority... none whatsoever

It will be unfortunate for them to hear that disk space places limits on their "authority."


ah, I didn't realize email attachment size was holding me back.


Corporate America should pay more attention. There is growing revolt among the proletariat over lockdowns and arbitrary technocracy rules. Vaccine mandates will push this even more. It's crazy to think the White House got stormed less than a year ago. Who knows how many dirty secrets the SolarWinds hack has soaked up, and what will be revealed down the line.


As a point of pedantry, the White House did not get stormed, the Capitol building did.


You're right. My mistake. However, it was a little over a year ago (June) that BLM protesters breached the fence surrounding the White House and Trump went into a bunker. If the media reports were correct.


It’s always possible to mention things that have little to do with the actual conversation and yet still have an amount of semantic context with things that are statistically insignificant.




