
You don't like the term "IT managers"? Let's call them "company managers" then; does that suit better?

You as a company allowed an external entity to push a completely uncontrolled update to all your production envs. What if CrowdStrike had been hacked and instead of a BSOD you got a cryptolocker?

If you don't realize how crazy this is, then I have nothing to add.




I'm sorry, but I don't really understand what you are arguing for.

I can see what you are arguing against: the unchecked autoupdate policy for critical software. The problem here is that almost nobody gates updates anymore, especially because of the overhead it caused across the industry. To replace that process, there are contracts in place so that if a supplier messes up, they are held financially accountable. It's called an SLA.

Now, as for virus protection: AFAIK nobody ever gated AV updates. OS updates, yes; OS upgrades, even more so. But AV? Not to my knowledge.

What you seem to be arguing for is unrealistic. Consider a 0-day exploit: AV vendors frantically push an update to fight it, but IT fails to approve the gated update in time. Time and time again, autoupdate has saved our collective a*es.

The IT managers are definitely held accountable, so they will definitely insist on NOT gating updates.

CrowdStrike should be held accountable for this fiasco, not the individuals who manage each company's IT infrastructure. If CrowdStrike survives this, that is a failure of our companies' leadership.


AV is still software; a special kind of software, but still software.

If you look at how "normal" software updates are handled around the world, you will see a recurring pattern (see the sketch below):

- updates are first deployed to test envs and then to production
- large production envs are updated in "waves"
- critical updates may go directly into production, but when that happens they need extra authorization and awareness
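
Just to make that concrete, here is a minimal sketch of such a gated rollout in Python. Everything in it (deploy_to, health_check, the wave sizes, the soak time) is an illustrative assumption of mine, not any vendor's actual API:

    import time

    WAVES = [0.01, 0.10, 0.50, 1.00]  # fraction of prod hosts per wave
    SOAK_SECONDS = 600                # wait before widening the blast radius

    def deploy_to(hosts, update):
        # Placeholder: push the update via your config-management tooling.
        print(f"deploying {update} to {len(hosts)} hosts")

    def health_check(hosts):
        # Placeholder: ask your monitoring whether these hosts still boot,
        # respond, and are not crash-looping.
        return True

    def rollout(update, test_hosts, prod_hosts,
                critical=False, authorized=False):
        # Critical updates may skip the test env, but only with sign-off.
        if critical and not authorized:
            raise PermissionError("critical fast-track needs extra authorization")

        if not critical:
            # 1. Test env first.
            deploy_to(test_hosts, update)
            if not health_check(test_hosts):
                raise RuntimeError("update broke the test env, rollout aborted")

        # 2. Production in waves, halting at the first unhealthy wave.
        # For simplicity each wave re-includes the previous, smaller one.
        for fraction in WAVES:
            wave = prod_hosts[: max(1, int(len(prod_hosts) * fraction))]
            deploy_to(wave, update)
            time.sleep(SOAK_SECONDS)
            if not health_check(wave):
                raise RuntimeError(f"wave at {fraction:.0%} unhealthy, halting")

With something like this in place, a bad file would have stopped at the test env or at the 1% wave instead of taking out the whole fleet.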

Please tell me why this pattern cannot be applied to AV updates.

> Now, as for virus protection: AFAIK nobody ever gated AV updates.

What happened today tells us that it is a bad practice. Btw, I know of a customer of mine that didn't incur any issues in production, because they updated the test env first and spotted the problem there.

> CrowdStrike should be held accountable for this fiasco, not the individuals

We agree that CrowdStrike should be held accountable, but they are not the only ones.

What happened today tells us that there is a big hole to be plugged, and it can only be fixed by individual companies themselves. CrowdStrike, if it survives, can improve its QA process, but who will guarantee you that it won't happen again? What about other vendors? You should always assume that everything can fail and adopt processes that help prevent and mitigate these failures.
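
As a tiny illustration of the "assume everything can fail" mindset, here is a mitigation sketch to go with the prevention sketch above. The version-pinning scheme is my own invention, not how any real AV product manages definitions:

    # Keep the last known-good definition version pinned, so a bad push
    # can be reverted instead of leaving machines broken.

    known_good = "defs-2024-07-18"  # hypothetical version label

    def apply_update(version, install, revert):
        global known_good
        try:
            install(version)
        except Exception:
            revert(known_good)  # mitigate: fall back to the pinned version
            raise
        known_good = version    # promote only after a successful install

(Of course, when the failure mode is a boot loop, this logic has to live outside the affected machine, but the principle is the same.)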

Again, think just for a minute: what would the consequences of today's fiasco have been if, instead of a bad file, they had pushed a cryptolocker or a trojan? SolarWinds tells us that this is not a hypothetical scenario but a real risk.



