Github.com unavailable due to a large DDoS attack (status.github.com)
113 points by jpswade on Oct 3, 2013 | 101 comments



The second outage in two days; github is getting increasingly unreliable - https://status.github.com/messages.

Also, I'd be interested to know how a complete lack of service is 'mitigating a DDOS attack' - to me it sounds like 'successful DDOS attack'.


Two hours ago, they were down constantly, but now they seem to be up. Github seems to be getting increasingly reliable!

You can always pick an appropriate window of time, point to it, and say "See, there's a trend!". That doesn't make it so.

Github used to go down much more often than it does now - calling them 'increasingly unreliable' really just shows that you have begun depending on it more heavily.


Maybe increasingly unreliable is the wrong term. What about sufficiently unreliable?


Their status page [1] still indicates a 99.85% uptime in the past month, and before this and yesterday's problem, their status page was mostly green across the board for a couple of weeks. It depends on your requirements, really. Nobody can guarantee 100% uptime.

[1] https://status.github.com/graphs/past_month


'Sufficiently' is meaningless without a qualifier. Sufficiently for what purpose?

There is a value proposition involved - you can run your own source code hosting and anything else, but it costs money and time to do it. Especially if you need six nines of uptime.


For a large DDOS attack there aren't any easy ways to drop only the DDOS traffic - especially if it's hard to identify the DDOS traffic in the first place. If they're getting more 'bad' incoming traffic than their connections can handle, I don't know how they'd drop that stuff - they have to receive the packets before they can filter them. Maybe their bandwidth provider has tricks for this...


They're using Prolexic, by the looks of things... You'd think a company that specializes in mitigating DDoS attacks would be able to mitigate DDoS attacks. Maybe I'm just misunderstanding the word 'mitigate'.


It's not their fault; we were in the middle of provisioning and service validation with them, but it wasn't completed. We had to work through some issues on the fly that we'd hoped to handle non-disruptively, but they're mitigating well for us now that we've got it dialed in.


Fair enough. In that case, at least it's not a Friday.


Security companies are all snakeoil.

In my experience, they never deliver. They always bill, though.


Infosec in general is a ghetto of navel-gazers.


In response to your comment and the parent comment, from someone in InfoSec: security can never be completed in the way a product can be. It's an ongoing war, and sometimes your opponent gets the upper hand for a while. The problem with being the "good guys" in security is that you have to make sure every hole is closed while still letting the business run. It's easier to be the bad guy, because you just have to find one thing the security team missed.

Security doesn't exist without the business and the business doesn't exist without security, but the business tends to trump security for the sake of features and convenience. It's a very delicate see-saw, and all you can really do is try to run back and forth from side to side, hoping that the other end doesn't hit the ground before you can get over there again.

Attitudes like yours don't help a damn thing.


Actually, as a solution architect I have to deal with all sides of the problem: people attacking, audit companies, penetration testing companies, and software engineers leaving gaping holes.

The only people who deliver little value are the paid-up consultants. When a full penetration test and code review misses four obvious vulnerabilities I placed on purpose, they get told to fuck off. Application firewalls that are circumvented trivially. QoS solutions that don't work.

So far, four well-known, well-respected companies offering certification and testing have missed the holes and have been fired.

That's the problem: no delivery.

My attitude might be wrong in your eyes, but I refuse to employ box-tickers, which is what the entire white-hat side of the industry is about. Canned report, where's my cheque?

No seesaw, other than a bent, twisted one that sucks up cash in exchange for a half-arsed job.


Every defense system has its limitations. There's a truism that if brute force doesn't work, you aren't using enough of it, and I think that applies to running a DDOS attack.


Couldn't they use something like CloudFlare to have the IP point to local servers? Then the traffic is split by location, with each edge server taking only local requests. That should greatly reduce the incoming traffic per server, at which point they can try to filter out the 'bad'.


You assume that the attack was limited to their web stack. For all we know it could be the systems that handle the git-over-ssh connections.

I may not be too versed in CloudFlare, but I didn't think they would be able to protect a service like SSH.


Indeed you would not be able to. CloudFlare only does HTTP/HTTPS right now. Technically, since Nginx can also support SMTP, they should be able to do that as well, but it's not implemented currently. Basically, if you want to protect SSH it would have to be a provider that does lower-layer (L3/L4) protection, like Prolexic.


Well, CloudFlare can certainly forward (or not forward, in the case of bad traffic) ssh traffic. But they would need to dedicate an IP to your account, or provide you with a port number to use.


Getting things through https definitely did not work for some time. I thought I broke my vundle.


Is it feasible to firewall whitelist any IP that has ever pushed to a repo?
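Something like this, say - a hypothetical sketch using Linux ipset, with populating the set from your push logs left out:

    # Keep a set of known-pusher IPs and, during an attack, only
    # let those reach the ssh port (the IP below is a placeholder):
    ipset create pushers hash:ip
    ipset add pushers 203.0.113.7

    # Drop ssh traffic from anyone not in the set:
    iptables -A INPUT -p tcp --dport 22 -m set ! --match-set pushers src -j DROP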


You could, but now you're just forcing the attackers to fork an existing project and push some nonsense code to it before they can attack.


Why would you want to cut off all people that use only the web front-end?


There are certainly providers that have networks you can hide behind which can filter+absorb DDoS traffic for you.


That doesn't work in all cases. If you can't distinguish between good and bad traffic, or if the attack isn't specifically targeting one entity, it becomes much more difficult to handle.

It also depends on the ingenuity of the DDoS attack, the details of which aren't known to the public, so you can't really say anything sensible about it.

If neither the anti-DDoS tools they are using nor a service like Prolexic is keeping them up, that's usually hint enough that this attack isn't particularly common or easily filtered out.

You can gripe about all the companies and tools if you will, but a good DDoS is quite a bit more complex than 'just filter away the bad crap'.


> you can't really say anything sensible about it

See #4 of the 12 timeless networking truths:

   (4)  Some things in life can never be fully appreciated nor
        understood unless experienced firsthand. Some things in
        networking can never be fully understood by someone who neither
        builds commercial networking equipment nor runs an operational
        network.
-- http://highscalability.com/blog/2013/10/2/rfc-1925-the-twelv...


Would it help if they allowed users to specify some port other than the default, perhaps only as a fallback? I am presuming that it would be easier for them to prioritize traffic to a given range of ports -- in this instance the non-standard ones -- but perhaps that's wrong.

If so, I wonder if they could counter this now by 1) implementing that, 2) opening and publishing a non-standard port for login solely for that purpose, and 3) maybe moving that alternative port around if the DDOS shifts onto it.

Even if not perfect, it would force the attack to spread its resources.
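For what it's worth, something like this already half-exists: GitHub documents an SSH-over-the-HTTPS-port endpoint at ssh.github.com, though a fixed, public port obviously doesn't give the moving-target benefit. Roughly:

    # Test ssh to GitHub over port 443 via the documented fallback host:
    ssh -T -p 443 git@ssh.github.com

    # Make it the default on this machine:
    cat >> ~/.ssh/config <<'EOF'
    Host github.com
        Hostname ssh.github.com
        Port 443
        User git
    EOF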


Again...

Personally, I can't see why anyone would want to DDoS github unless they are just being an asshat.


Many would do it just as a challenge. Yes, they are being an asshat, but they just don't realise that.


> they just don't realise that

Their parents didn't raise them very well if they don't realize that denying access to an important resource to tons of people is really, really lame.


great advertising for DDoS protection services.


Considering that Prolexic isn't exactly keeping Github up, not really!


I've heard about organised crime doing this sort of thing and asking for payment to stop.


Happened to 'The Million Dollar Homepage'[1].

[1] http://en.wikipedia.org/wiki/The_Million_Dollar_Homepage#DDo...


Agreed... but sites like these are probably targets for a bunch of reasons... http://arstechnica.com/security/2013/02/dev-site-behind-appl...


The problem is, even just being an asshat is enough of a problem if there are enough of them.

Mozilla has had the same issue: every so often someone tries to DDoS bugzilla.mozilla.org, causing it to get all slow and hard to use. :(


Seeing how many people (ab)use github as a free file/web hosting service for various crap, it is strange we don't see such attacks more often.


My guess is it is accidental.


As in, "oops I typed in the wrong address for my botnet and didn't notice for two hours" ?


I'm surprised there's only one "git is distributed" comment thread so far, but as a reminder: http://rubygems.org/gems/deus_ex may help with getting deploys running during a GitHub outage. Usage instructions are at http://rubydoc.info/gems/deus_ex/0.0.2/frames.


The github issue tracker and wiki are not distributed, as far as I know, though.


The wiki is a git repository, so in effect it is distributed.

The issue tracker is not, unfortunately, but neither are most issue trackers.


The solution is to use a bug tracker within the SCM:

http://bugseverywhere.org/ (my personal favorite, but there are three or four other options you can look into).

Not only does it offer distributed bug tracking on the command line (without breaking your workflow), it also implicitly lets you isolate bugs to branches. You can fix a bug in a branch, and a subsequent merge of the changeset will automatically fix the current branch.

I don't understand why these projects are so underrated. In the "early git times", distributed bug tracking on top of git was quite a hot subject. These tools solve many issues nicely.

Github might be a "nifty" viewer, and I do host projects on github for added visibility (by simply using a second push remote), but that's about it. I find "tig" and "bugseverywhere" to complement git nicely and to work much better than any web browser could.


Sure, but that's not the point of this gem. This just allows deploys to continue when GH is unavailable.


DDOSing Github reminds me of a study I read about a while ago. It showed that many burglars tend to break into homes close to their own instead of targeting wealthier neighborhoods.

Many of the reasons for that will be very different from this attack on Github, as there is no money in attacking Github. But one reason may be similar: lack of imagination, or in other words, stupidity.


Hmm, it's scary how much this affects us.

We can't push the latest bugfix to GitHub. Azure cannot deploy it. And I cannot run bower install on the project I would be working on in the meantime.


And here we all are, using "git", a "decentralised" source code management tool.


If the network doesn't permit decentralized networking, then decentralized tools don't matter.

http://blog.zerotier.com/post/58157836374/op-ed-internet-cen...


You can always push to GitHub and your own private repo, then deploy from that private repo.


Well, then you have learned not to deploy from github. Get an AWS micro instance which you can also push to and deploy from, or use any of the many other possibilities that a DVCS gives you.


Yeah, as interesting as all these new Docker fads are, it's not ideal when you can't update your containers.


On the plus side, I get to email my project manager: sorry, github is down, can't do work! "Compiling" :D


Keeping code in sync with your colleagues should not be a major problem, given that you should be able to sync with each other (distributed VCS, etc). Azure... I don't know how that works. Heroku would work without github being online, since you'd push directly to Heroku - maybe that'd be an idea?

bower install is annoying, yeah; it should allow for backup location(s) to resolve dependencies. Maven allows people to configure multiple repositories, which are often mirrored against each other while hosted by vastly different parties; if one repo mirror is offline, there's a dozen others available, in a lot of cases.

For those components, github is a single point of failure.


inb4 "GIT is distributed".

While GIT is distributed, working collaboratively with others still requires a central platform where everybody working on that GIT repo can connect. GitHub is a very convenient central platform.

To GIT's credit: you can, with a little server know-how, set up your own git server and give all previous contributors access. However, for a small downtime this could be overkill.


Very little server know-how, to be exact. All you need is an ssh account with each team member's key added to `~/.ssh/authorized_keys`.
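A rough sketch, with a made-up host and path:

    # One-time setup, on any box the whole team can ssh into:
    ssh fallback.example.com 'git init --bare /srv/git/project.git'

    # Each developer then adds it as a second remote:
    git remote add fallback fallback.example.com:/srv/git/project.git
    git push fallback master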


You don't need server know-how; you can just use another of the available git hosts (e.g. bitbucket).


And even if you don't want to use a dedicated git hosting service, good site hosting platforms (Webfaction) make it incredibly easy to install a git server.


You can even spin up gitlab from turnkeylinux in a few minutes or so wherever you want it.


Git is not an acronym.


Why capitalize git?


Must be my manager.


I agree. I can't install/update plugins for Vim using Vundle.

I heard that the Go language can import packages directly from GitHub. So does that mean they can't compile?


That's right, it also supports importing from Google Code.
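Strictly speaking, it's the fetch step that breaks rather than compilation - a sketch (the import path is hypothetical):

    # 'go get' fetches straight from the hosting site, so this
    # fails while github.com is unreachable:
    go get github.com/someuser/somelib

    # Code already sitting under $GOPATH/src still builds offline:
    go build ./...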


Is it possible to add 'mirror' remotes to git so every action takes place in two places, in case one goes down?


I've done something like this before with github/bitbucket and it worked as expected. http://stackoverflow.com/questions/849308/pull-push-from-mul...
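The gist of it, if you don't want to click through (remote URLs are made up):

    # Add a push URL per host to the existing remote:
    git remote set-url --add --push origin git@github.com:team/project.git
    git remote set-url --add --push origin git@bitbucket.org:team/project.git

    # A single push now lands on both hosts (fetches still
    # come from the original URL only):
    git push origin master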


You probably should reconsider the choice of your tools. I certainly would.


I bet they DDoS GitHub using tools obtained from GitHub.


I think it's interesting to note how people have come to expect five-nines reliability out of an internet service. Not only is GitHub under fire, but the whole infosec industry gets blamed.

Back when my dad installed physical PBXes, the big ones that could be the size of a mainframe, uptime was the biggest selling point: they had to have reliability to five nines (99.999%, or about five minutes of downtime a year, if you don't get it). Then when cellphones first came out, everyone got lackadaisical about dropped calls. And overnight an entire industry worried about reliability "to five nines" turned into "whatever, it's a new service, you've got to expect some difficulties."

The internet started with relatively low reliability. No web host I've ever seen has truly been able to achieve 99.999% uptime. And yet, when GitHub goes down under a "large DDOS attack" but still manages to maintain 99.85% uptime over the last month (with several DDOS-caused outages), everyone comes out of the woodwork to complain. After all, it isn't as if hosting a massive service while keeping everything secure and running happily is an easy thing.

If you're tired of GitHub outages, then get a Bitbucket account, or host your own Git repository for backup. What serious developer, or service, would keep all their eggs in one basket if they really depended on the uptime of just one centralized service?


Out of curiosity, to those who prefer Bitbucket to GitHub, how often (if ever) does it (Bitbucket) go down due to DDoS or otherwise?


Bitbucket has had its own fair share of issues, not least the "we're pulling Bitbucket offline for 5hrs to move to a new datacenter" debacle not so long ago. I understand why they had to do it, but it is indicative of some issues with their architecture.

They probably haven't gone offline through a DDoS yet because they're just not popular enough to warrant an attack, but I wouldn't bet that Bitbucket would fare any better.


It's not fair to compare scheduled downtime with being down for a few random hours because of an attack (or any other reason).


GitHub was also offline for a not inconsiderable period of time fairly recently for a datacentre move.


bitbucket goes down just as much as github, if not more. still love em though.

but they aren't entirely honest about downtime.

sometimes they're down, people are tweeting about it, and the status page is all green lights.

but still love em.


oh thank fuck, I was getting annoyed thinking it was me who was screwing up my homebrew install. For once it's not my fault!


I'm a rookie, so how does Github being down for a few hours cause problems? I push to Github a few times a day, but if I don't, I just push to Github the next day.

Are there teams that need to be in constant sync pushing and pulling multiple times an hour?


If you deploy from git, like many of us do, and you're in the middle of pulling down dependencies from github...

It's happened a couple of times. We moved to our own private server with forks of any dependencies we need.


Anyone dealing with production code should do this, in my opinion. Hope for the best but plan for the worst.


If you depend on any of these:

Issues: hard to fix a bug you can't read about

Pull Requests: code review is a lot less fun without a tool for commenting, without something triggering your build server to verify each commit, etc

Releases: distributing builds to QA or users is all of a sudden more awkward than you're used to

It's not really about pushing and fetching code. :-)


These problems coincided with me trying to get started with Git and GitHub for the first time. I had a perplexing, frustrating day.


Try BitBucket, same stuff productivity-wise plus private repos.


Bitbucket: $10/month for 10 users and unlimited private repositories.

Github: $200/month for unlimited users and 125 private repositories.

If you're a team of 10 or fewer with a few dozen clients and dozens more supporting libraries in a small company, Bitbucket blows Github out of the water.

For the same $200/month, Bitbucket also offers unlimited users (again, with unlimited private repositories).

I wouldn't call Github's pricing unreasonable. But I have learned to appreciate Bitbucket's service (they're really on top of things on their Twitter feed), and their pricing is lunch money for a day (as opposed to skipping lunches for a month).

Highly recommended.


We switched from Bitbucket to Github because of Bitbucket's reliability issues. YMMV.


I've seen a lot of stuff in their Twitter feed they seem to work through, but I've never actually run into any issues. So I've interpreted that as transparency I guess.

Been with 'em for maybe a year now? Never had a failed push or pull. That's happened a number of times with Github but I wouldn't suggest it's been damaging to the business. Only a minor inconvenience at times.

So with your anecdote and my anecdote, we get to call this "data" now right? :-)


I'd be interested to hear what issues you had with Bitbucket - we just started using them (switched from Github) and haven't had any issues just yet... but I'd like to know what to be wary of!


Is there a git tool which syncs remotes? I could set up a second remote for the times github is down, but how do I share it with my team members? Does everybody need to add it manually? That could become tedious with larger teams or more remotes.

Plus: There are a bunch of decentralized issue trackers, can any of them sync with github? Is that possible with their api?


> Plus: There are a bunch of decentralized issue trackers, can any of them sync with github? Is that possible with their api?

Last time I looked, their issues are not stored in git itself. This is something that has kept me from using their issue tracker for my projects, as it encourages lock-in.


The great thing about GitHub is that it's still Git. If GitHub is down, that just means that your central publishing site is down. It doesn't mean that your developers can't work. They can still share amongst themselves. Like they probably should be doing even when GitHub is up.


How would this work? (This is a serious question.)

Would each of us set up each other's internal IP addresses (192.168.0.101, etc) as remote repositories? Would each of us run a git repository on our own boxes? Or would we set one up on our own AWS box or something?


Yep, you set up each developer's machine as a remote, but you pull from them (rather than push). It helps to pull to a separate branch, though.


Well, ideally you'd have DNS set up internally so you don't have to use raw IP addresses, but essentially yes, you map each of the people with whom you're collaborating as remote repositories. Because they are remote repositories, all on their own. There is nothing special between your repository on your machine and the repository on github.
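A minimal sketch of the pull-from-a-colleague setup (hostnames and paths are made up); the colleague serves their repo read-only with git's built-in daemon:

    # On the colleague's machine, serve everything under ~/work
    # read-only (--export-all skips the git-daemon-export-ok marker):
    cd ~/work && git daemon --export-all --base-path=. --reuse-addr

    # On your machine, add them as a remote and pull into a branch:
    git remote add alice git://alice-box.local/project
    git fetch alice
    git checkout -b from-alice alice/master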


Set up an internal dev box, push code there, and from there to github...


They are back up for me now.


... although trying to download a zip of a repo is giving me a 500 error.


me too


If online code availability is _that_ important for you then just push code to both GitHub and BitBucket.


Since the VCS is distributed, you could always mirror a repository on a NAS at home or something.

Edit: I might have misread the parent's comment. If CmonDev was referring to public availability, just a local repo won't do. It depends on who needs access to the code, etc.


Maybe the fact that they have to deal with non-HTTP incoming traffic makes them an easier target?


Can anyone explain why Github is a DDoS target?

Seems a bit pointless


Extortion


Testing resources is one.


Quick, blame it on Ruby.


Butthurt, anyone? lol



