Here’s why we keep getting hacked – clear and present Billabong failures (troyhunt.com)
155 points by troyhunt on July 16, 2012 | 47 comments



Most of the companies I've worked with had absolutely no security considerations. They expected it to work out of the box with, say, RoR and Apache.

Needless to say, it didn't work. That said, we never noticed any service disruptions caused by a third party. We were small fish.

Which leads me to the following: security is a scaling issue. Sure, your prototype, even though it's ready to go live, doesn't really need to be secured in most use cases. But once you start to grow, especially rapidly, things can easily spin out of control.

You need some time to get a proper certificate, your database looks like the development version on your local machine, you've handed out a few SSH keys via e-mail and they're on a few USB sticks lying around so someone has root on the deployment server while you sleep, you log in on public networks to get company resources you need ASAP, etc.

What I'm really interested in is this: how does one scale it? What are the priorities?


I'm not sure if it's unconventional or not, but I'm a big believer in doing as much as possible correctly from the start, i.e. having a dev -> staging -> production setup, clearly documenting everything, devising and sticking to a set of procedures and guidelines (for deployment, security, coding style, etc.), not taking "bah, it will work for now" shortcuts with fundamental design choices, using a proper bug tracker. And so on.

This adds a bit more up-front planning pain to any project and also slightly slows down the daily dev routine, BUT it pays dividends further down the line when you suddenly start getting noticed and your usage figures begin multiplying every 24 hours. In that situation there isn't time to rework production DBMS schemas, change the app code to deal with bcrypt'd password hashes, or tighten up the deployment script so you don't end up accidentally dropping a dev version of the site onto your public servers, and so on.
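
To make the bcrypt point concrete, here's a minimal sketch of hashing and verifying passwords in Python, assuming the third-party bcrypt package (the function names are just illustrative):

    import bcrypt

    def hash_password(plaintext):
        # gensalt() embeds a per-password salt and a work factor in the hash
        return bcrypt.hashpw(plaintext.encode('utf-8'), bcrypt.gensalt())

    def verify_password(plaintext, stored_hash):
        # checkpw re-hashes with the salt embedded in stored_hash and compares
        return bcrypt.checkpw(plaintext.encode('utf-8'), stored_hash)

Retrofitting this onto a live users table that already stores weaker hashes is the painful part, which is why it's worth doing from day one.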

Better to pay the up-front cost of getting the basics correct at the beginning of the project than to deal with an uncontained explosion later on, just as you hit the limelight.

Also, the up-front pain of doing this diminishes with every additional project you do as a lot of the concepts, approaches and procedures are re-usable.


The idea of doing a little work up front to save yourself trouble later on is good in theory, but the reality of the early-stage startup world is that if you don't move fast enough, you won't be around to experience that trouble later on. Sometimes it makes sense, sometimes it doesn't. The trick is to find a happy medium that works for your situation.

Totally agree on that last part. The pain rapidly diminishes as you come up with simpler ways to do things in a reusable way. Also, the tools to do it are getting better and better as more and more people are feeling the same pain.


Not only that, but it reduces the number of pivots you're able to try as a startup that is seeking product-market fit.


There's a massive difference between having procedure up-front (staging, etc.) and just writing correct code.

For instance, properly escaping output, parameterized queries, password hashing -- all those don't require more work (or if they do, it's so, so minimal). I mean, jeez, you're writing the code, just do it right.
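
As a minimal sketch of the output-escaping and parameterized-query points in Python (the sqlite3 table and values here are made up purely for illustration):

    import sqlite3
    from html import escape  # escaping for anything echoed back into HTML

    untrusted = "alice' OR '1'='1"  # pretend this arrived in a form field

    conn = sqlite3.connect(':memory:')
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    cur.execute("INSERT INTO users VALUES (1, 'alice')")

    # Parameterized query: the driver binds the value, so the injection
    # attempt above is treated as a literal string, not as SQL.
    cur.execute("SELECT id, name FROM users WHERE name = ?", (untrusted,))
    print(cur.fetchall())  # [] -- the injection has no effect

    # Escape anything you render back into a page.
    print(escape('<script>alert(1)</script>'))

It's essentially no more code than the vulnerable string-concatenation version, which is the point.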

That's a far cry from setting up multiple servers, certificates, etc. and having a real operations process. I can forgive, say, not having automatic builds. But there's no excuse for having a SQL injection in 2012.


You're right and I agree. I just meant one may as well try and do it all properly from the outset, since it really isn't all that hard/inconvenient and IMHO the pay-off is worth it. If you want to build a secure system, part of that is putting some planning effort into your architecture and dev process before you break out your text editor and start coding like a maniac. If you're a one-man band, it doesn't have to be too OTT. Make the dev and staging boxes virtual machines on your desktop, create a *.mytestdomain.internal self-signed cert that works on a dummy internal domain, use Trello as a bug tracker, keep a list of who has access to what and why in Google Docs, subscribe to a few key security and vendor mailing lists, write a simple bash or Fabric script to deploy your site, keep sensitive project-related login codes in a TrueCrypt container on a USB pen drive, etc. Simple stuff, but it all helps.
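
For the deployment bit, even something as small as a Fabric-style fabfile goes a long way. A minimal sketch, assuming Fabric 1.x and a made-up host and path:

    # fabfile.py -- run with: fab deploy
    from fabric.api import env, run, cd

    env.hosts = ['deploy@www.example.com']  # illustrative host

    def deploy():
        """Pull the latest code and reload the app on the production box."""
        with cd('/srv/mysite'):
            run('git pull origin master')
            run('pip install -r requirements.txt')
            run('touch app.wsgi')  # trigger a reload; depends on your server setup

Having even that much scripted means you never end up hand-copying files onto the production box at 2am.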


That may work for a clearly defined product and a very loose development timeline, but what about when you need to build a social-network-based app with a deadline three days from the order, one which may well be spiralling virally on the day of launch? Security is then clearly an afterthought.

Disclaimer, though: I clearly state to such clients that security will not really be up to the challenge on those timelines. They usually don't care, to the point that I don't really bother anyone with it any more, since I get blank stares when I discuss security with them.


> I get blank stares when I discuss security with them.

I can assure you that the blank stares go away completely when there is a breach. You will probably learn by then that your customers just expect you to handle it professionally. They might not be able to think beyond their deadline and their first business goal, but leaking passwords or losing user data will be your fault eventually.

I will not pretend that all my past work is bulletproof, but it shouldn't be that difficult to make sure your tools and code handle all incoming data like hot grenades. You might skip validating some big up-front user import at the beginning, as you control that data chain, but $_POST or params[:model] or whatever else is flying in your face should get the standard treatment.
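
The "standard treatment" can be as simple as whitelisting the fields you expect before they go anywhere near your models. A minimal Python sketch (the dict stands in for $_POST or params[:model]; the field names are made up):

    ALLOWED_FIELDS = {'name', 'email'}

    def filter_params(raw):
        """Keep only expected keys and coerce values to plain strings."""
        return {k: str(v) for k, v in raw.items() if k in ALLOWED_FIELDS}

    incoming = {'name': 'Alice', 'email': 'a@example.com', 'is_admin': '1'}
    print(filter_params(incoming))
    # {'name': 'Alice', 'email': 'a@example.com'} -- the extra field is dropped

Escaping and parameter binding still matter, but not blindly trusting whatever arrives is the first line of defence.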

Think of it this way: right now you do not want to have your CV say "2012: developed user login module for LinkedIn. Finished within the deadline"


I suppose some apps will be different but a lot share the same basic security concerns. So just figure it out once and apply the best practices you've come up with to all future projects as a matter of course. It takes time to initially figure it all out but after that there's not too much extra effort required to code securely VS just smashing stuff together to meet a deadline.


"Security is a scaling issue" is a dangerous assumption to make.

While a small site might not be targeted directly, you might still get hit by an automatic attack if you use any off the shelf software. It's not fun when Google marks your site as harmful because your OpenX-Ad-Server suddenly serves malicious ads.


Ok, so then first priority is to get secured from "automatic attacks on off-the-shelf software".


"If you had a dog when classic ASP was replaced, it’s probably no longer with you."

Well written article that strikes the right tone.


Unless you have a dauschund. Those little guys can last 15-20 years if you treat them right.


Yeah, but statistically even the loyal sausage dog would have a better than average chance of having moved on since the ASP launch! (10 years ago avg age would have been < 10 years)


Fun fact: Dachshund actually translates to "badger dog." They're the perfect shape to go digging around in holes in the ground hunting badgers, rabbits, and the like.


just got myself one, he's not even a year old yet. Definitely prepared for the idea of having him around even when in my 40's though.


Be careful, they grow on you. I have 3.


Genetics play a huge role in a doxen's lifespan.


*Dachshund


I have a problem with the constant "blaming the tools" in this article. Blindly relying on the framework is one of the major causes of security issues. Exploits will always be one step ahead of the frameworks, and blaming the tools breeds the wrong kind of mindset when it comes to security.


Fully agree. A lot of Microsoft fanboyism. The SQL injection seemed serious.


Actually, I think it's very clear there were just a lot of bad dev practices happening across the different technologies. Classic ASP on Impact Data, PHP on the main Billabong site and current day .NET on the store. Clearly bad things were done on all.

However, some of those basic injection flaws in classic ASP would not have been possible on a more modern technology stack, be that from Microsoft or anyone else. A five-year-old version of PHP also isn't going to do you any favours. Newer frameworks are simply better at saving you from yourself, but that doesn't excuse the sloppy practices.


The guy's a Microsoft MVP. It was more a Microsoft oriented article in my opinion.


This post is irrelevant; the author can reduce it to: https://billabong.com:8443/

(Plesk.)


For those who don't know, Plesk has been a recent vector to compromise a large number of websites: http://krebsonsecurity.com/2012/07/plesk-0day-for-sale-as-th...


Apologies! Should have expanded on that a bit. Overall gist: billabong.com has Plesk publicly available for login. Plesk allows root (full system access) logins from a remote source and has all kinds of exploits available for purchase and abuse.

This kind of stuff happens, but in essence Billabong's sysadmin needs to start surfing exploit mailing lists more than he's surfing other places :)


This is actually a decent article, but he spends far too long going on about HTTPS. Yes, it is insecure, but it isn't why they had a massive password compromise.

Ditto with XSS.


Quite right, and I did refer to that. The point was that if you can't get simple things like these right (among the others referred to), is it any surprise that a major breach occurs?


I discovered a web site with an XSS vulnerability. I sent them an email about this security problem a year ago. Nothing has changed yet.

What should I do now?

Last time, I pointed them to some Wikipedia articles relating to their vulnerabilities.


It depends on a number of things. If you've documented your communications with them and have repeatedly tried to get in touch, you may feel like disclosing publicly. A year is more than enough time to fix an XSS issue, and nobody would really judge you for going public with it.

However, this might depend on where you live. Some countries (like the UK, where I'm typing this from) make testing websites for vulnerabilities illegal, no matter how serious the issue or how good the intentions[1]. Very few people are actually caught by these laws, but there is always a risk that you piss off a litigious company that then goes after you.

[1]:http://jeremiahgrossman.blogspot.co.uk/2006/09/is-testing-fo...


I encountered a problem with one of our vendor's login pages. I found that sending them a link that added a giant "YOU HAVE A PROBLEM HERE" graphic to the page and popped up an alert containing your password when you hit submit got the point across better than trying to explain it.

I probably wouldn't have done that if I didn't already have a relationship with the vendor, though. I don't want to be accused of extortion or cyberterrorism.


http://serverfault.com/questions/277843/security-flaw-report...

I would say the biggest concern is that you could become a target. Say that you, in good faith, inform them that you can buy items for free due to an injection attack. Four days later, someone else buys $10,000 worth of gear using the same exploit. They now have only one suspect: you.


Write a blog post explaining exactly what is wrong and why it is a problem, then tweet it to them and post it to their Facebook wall. If they won't fix the security issues, then maybe you can save a fraction of their users by convincing them to go elsewhere. (And by making noise publicly they are probably more likely to actually do something about it)


When you do this, at least at first, don't name the site. In your email to the site you can tell them the blog is about them.

If you do feel the need to spill who is at fault, you can do it in the comments or in a follow-up post at a later date.


Yeah, it's almost your civic duty to warn potential customers. But you have to be careful at the same time not to attract more attention to it than necessary.


I'm not 100% on the rules of responsible disclosure, but isn't more than a year to fix an incredibly basic error more than enough time? The longer you wait, the higher the chance a black hat will come along; why should their customers burn due to the company's apathy?


Agreed. At that point I'd post it on an anonymous blog through a proxy just to protect yourself in the case they want to be assholes.


Well, thank you all for your advice. It's been an interesting discussion.

As it is only a small shop, I think I will email them again, but this time with a link that points to a more verbose description of the vulnerability, as someone mentioned.


While people expect/desire that form to be served via SSL, what matters is the URI to which the form submits, and whether it submits over SSL.

You could apparently serve the author a form over SSL, have it post to a malicious server, and he'd be none the wiser because he's focused on whether the empty form was sent over an encrypted socket.


Every link in the chain should be over TLS; otherwise, someone can change the unencrypted part to point somewhere else. The form, the site linking to it, and the URI it submits to should all be over TLS.


I can use LWP, Curl, Wget, etc. to submit a form over https to any HTTP server.

I am unaware of any protocol semantics that allow an HTTP server to determine how the submitted data was marshaled.


Eh? I think you misunderstand me.

As Facebook learned, submitting to an HTTPS server isn't enough; the form itself must be served over HTTPS too. Otherwise you can be man-in-the-middle attacked on the form page. Better yet, serve everything over HTTPS, so people can't change the links.


So what you mean to say is that if you don't use SSL all the time, somebody with a sniffer can pull your session ID out of the air and impersonate you by hijacking your session.

That's VERY different from a man-in-the-middle attack.

Do you think the coffee shop should have offered encrypted wifi?


Google 'ssl man in the middle attacks' and you'll see that SSL does not prevent man-in-the-middle attacks.


A good article on security and XSS, but the conclusion is a little disappointing. The headline makes it seem like the author knew why Billabong got hacked, so the whole time I was reading, I was waiting for the big reveal. But when it comes down to it, his conclusion is, "I don't know, there's a long list of things wrong with the site, but it was probably SQL injection." Felt kind of like watching that M. Night Shyamalan movie with the aliens.


Nonsense. The author demonstrates a range of obvious vulnerabilities in the site. Which of these happened to be exploited this time is pretty irrelevant when the website as a whole is so poorly implemented.


Exactly... the smorgasbord of security failures was the whole point of the OP. If the hack actually occurred because an employee decided to go Office Space on Billabong, that would not invalidate the value of the OP in the least.



