
That's pretty devastating to anybody who gave up their data to support things they enjoy. I would really like to see services get hit with massive fines so they actually "take security very seriously" before they get owned. It's far too late to care about it now; there's a lot of compromising data in that leak.



How do you distinguish someone who was lax with their security from someone who actually takes it seriously and still got hacked?


You can't :)

There is a huge Market for Lemons (https://en.wikipedia.org/wiki/The_Market_for_Lemons) style scenario in IT systems when it comes to security.

Everyone will say "we take security seriously", but there's no way for ordinary consumers (or indeed most companies) to determine what the company meant by their statement, and to evaluate the relative security of the systems of two companies.

This could actually provide a market incentive for companies not to spend too much on security, as those that do will have lower profits than those that don't... until they get breached. And even then, companies with good security can be breached...


I don't think it's actually that bad; for example, Patreon said that they don't store CCs and that they correctly hashed passwords, and as a consumer myself, I did take that into account. Obviously less knowledgeable consumers don't know what "bcrypt" is, but that's true of any product - you can't judge what you don't know how to judge.
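(For the curious: bcrypt is a deliberately slow, salted hash, which is what makes "correctly hashed" mean something. A minimal Python sketch using the common bcrypt package; this is illustrative, not Patreon's actual code:)

    import bcrypt

    # Hash at signup: gensalt() embeds a random salt and a work factor,
    # so two users with the same password still get different hashes.
    stored = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

    # Verify at login: checkpw re-derives the hash from the salt
    # embedded in the stored value.
    assert bcrypt.checkpw(b"correct horse battery staple", stored)
    assert not bcrypt.checkpw(b"wrong guess", stored)

The slowness (tunable via rounds) is the point: it makes offline cracking of a leaked database expensive.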


So with two Patreon-style sites, both of which say "we take security seriously", how would you judge, before a breach, which one would take better care of your data?

The information (AFAIK) about their security mechanisms only got released as a result of the breach, so even assuming you knew what the terms were and how to judge good security from bad, you wouldn't have the information until the site got compromised.

This is a very common problem; some more examples here: http://raesene.github.io/blog/2014/06/08/finding-security/


The information that they didn't store CCs was available in their FAQ previously: https://patreon.zendesk.com/hc/en-us/articles/203913779-Do-y...

The password hashing algorithm wasn't, but then again an informed consumer uses unique passwords for each site, so that's less relevant.


Many informed customers - perhaps most - use the same passwords on many sites because it's too hard to remember hundreds of passwords.


I don't think they're particularly informed if they aren't aware of password managers.


I'm aware, I just can't be bothered. Every time I create an account I ask myself "do I care if this gets compromised?". If the answer is no, then it gets a standard password.


As long as you understand that when that site gets compromised, all other sites where you use your standard password get compromised for you as well. Collectively, all those sites getting compromised for you may be enough of a reason to consider password managers.
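(Even without a full password manager, generating a unique random password per site takes a few lines with Python's standard secrets module; a rough sketch:)

    import secrets
    import string

    # One random 20-character password per site: ~119 bits of entropy,
    # so a breach of one site reveals nothing about your other accounts.
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(password)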


They may be informed and "aware" that password managers are a pain to use compared to using the same password for every site.


You can't judge which is better after a single breach either.


I love that phrase, it so aptly describes lots of markets (e.g. data vis/big data systems). Thanks for the link!


Maybe the answer is some sort of audit and certification.


In the case of Patreon, you might at least assume some negligence, as they had their development servers accessible to the public and using production data. In other cases it might be harder to tell. Shielding your dev and staging servers and using mock user data is pretty standard IMO. I have already requested that my Patreon account be deleted, as they clearly don't understand how to protect customer data.
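(Seeding dev/staging with mock users instead of production rows is cheap; for example, with the Python faker package. A sketch, not a claim about Patreon's stack:)

    from faker import Faker

    fake = Faker()

    # Throwaway user records for dev/staging, so an exposed dev box
    # leaks nothing real.
    mock_users = [{"name": fake.name(), "email": fake.email()}
                  for _ in range(100)]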


Also, someone who was quite likely the guy that hacked them (he posted this data dump a day ago) was claiming on Twitter that they'd left a root console on that machine open and unprotected to the entire internet. Of course, he's a troll so he may well have been lying...


I guess we could argue that if you take security seriously enough you don't get hacked.

EDIT: I was being sarcastic.

This is a good read: https://www.schneier.com/essays/archives/2000/04/the_process...


No, actually, we don't.

Security, if taken seriously, is a set of policies related to software and hardware (in the post-Snowden era).

Applying patches like grsecurity, running services in chroot jails, and installing IDSes and reporting tools makes a system (every system) extremely inflexible. Updates become a nightmare.
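(To make the chroot point concrete, a minimal sketch of jailing a service before it handles traffic; it assumes a pre-built /var/jail, root privileges, and an illustrative unprivileged uid/gid:)

    import os

    # Confine the process to a pre-populated jail directory (requires root).
    os.chroot("/var/jail")
    os.chdir("/")  # make sure the working directory is inside the jail

    # Then drop root, so a compromised service stays unprivileged.
    os.setgroups([])
    os.setgid(1000)  # illustrative unprivileged gid
    os.setuid(1000)  # illustrative unprivileged uid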

Tuning a system to avoid false positives might take forever, and then the topology/setup/clients/users change. Back to square one again!

New tech like Docker, or any kind of virtualization, is out of the question. You probably don't want PHP, Python or Ruby applications. Do you really need JS? Can you debug all the libraries the developers used? Is it their responsibility to audit every shiny new framework for security?! Are they going to pay for it? Is it worth it?

The personnel, even the managers, must go through a lot when the systems are secure. More often than not, people consider standard security measures exaggerations, so they try to avoid all the hassle. Can you cope with these people, time and again?

And sometimes even taking extreme measures might not be enough [1]. We keep adding layer upon layer at all levels: development, sysadmin (now DevOps), etc. Can you audit every piece of software that comes into the stack? Of course not.

You either have a security team that breaks everyone's balls with audits and strict policies, which will cost you money, or you don't; most startups, and even corps, cannot afford the kind of inflexibility and cost this brings.

Security is a trade-off, and the truth is that most of the time it's just additional cost and complexity. Even when you're ready to make that trade-off, you're still not invulnerable by any means.

[1] http://www.cnet.com/news/report-of-fbi-back-door-roils-openb...


Agree completely. To take it a step further - even if you assume that you have all of the above locked down and have inspected all of your servers - someone still has to connect to those servers.

Let's assume they are connecting using an iOS device or OS X on a Mac to run commands on the server. As everyone knows, Apple devices allow full root access from the cloud for Apple to install new applications etc. These same backdoors can often be exploited [1] or leveraged by governments. If we assume an attacker has full control of the laptop the devops engineer is using to secure the system, it becomes trivial for them to insert a backdoor into the servers too.

[1] https://truesecdev.wordpress.com/2015/04/09/hidden-backdoor-...


This is the funniest thing I've read on HN today, congratulations.


This is what things like PCI compliance are for. Admittedly PCI is a crappy ticklist, but it enables you to distinguish between organisations that at least try to check the boxes and those that don't.


The same way you do it for medical negligence: courts.


I'd really rather not see web developers have to carry malpractice insurance and be licensed by state boards.


I really think web^W developers should up their game if they want to avoid this fate.


On the bright side, maybe we'd get to append "W.D." to our signatures.


The law makes those kinds of distinctions all the time. For example, when someone dies at work, the first thing the government tries to figure out is whether it was just bad luck or the company was being sloppy.


Then they didn't take it seriously enough? Either they don't get hacked, or they can find the person responsible (rogue employee, etc.).


Fines? That would do more damage than good (if there's any good, that is). One of the best things about the information age is the ability for anyone to take part in it. You could be selling glow sticks to a guy a thousand miles away from you.

If there were fines, it would scare away people with less technical skill who want to start something new.

What we must do is introduce certifications; this would help make companies more security-aware, but wouldn't make it mandatory.


If you're an inexperienced startup then perhaps you shouldn't be taking risks with my data.

If I bought glow sticks at the mall and they injured me when I used them, I would expect consumer protection laws to impose fines that make selling them too risky a venture. Why shouldn't I be protected by a similar mechanism from injury caused by leaky data?


omg, stop being such babies. You can't run a startup without risk. I'm sure you've been around the startup world enough to notice how much risk these startups take in order to be successful; as a society we should encourage people to take more risks, and help them, not become a barrier to their success.


I don't want to encourage anyone to take more risks with my life, thank you very much.


I'm not sure how compromising the data is... if you're referring to the display of what people support, that's public information. For instance, here's my user profile: https://www.patreon.com/user?u=632496&pat=1 (you can find it with one targeted Google search).


You can make that page private in the settings, which I have.


> massive fines so they actually "take security very seriously"

The bad publicity around such a hack can very much turn a company belly-up. Massive fines will not alter the incentive structure significantly; they already know a hack is a bad thing.

On the other hand, there are plenty of interventions that would change the security incentives. For example, decriminalize white-hat hacking of any internet-connected system, as long as the vulnerability is reported to the relevant bodies within 7 days and no data is duplicated or altered. Heck, award prizes for it.



