GitHub: Increased Error Rates (githubstatus.com)
75 points by wst_ on Feb 19, 2020 | 52 comments



The advantage of a distributed version control system is, of course, that you can keep working even if GitHub is down.

And everyone has a copy of the code, so even if GitHub never came back up you'd be able to keep on working, once you'd agreed a new central server to use for syncing purposes.
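For illustration, something like this is all it would take — the fallback host and repo path here are made-up placeholders, assuming the team has agreed on them:

```python
# Minimal sketch: repoint syncing at an agreed fallback server while GitHub is down.
# The host and repository path below are hypothetical placeholders.
import subprocess

FALLBACK_URL = "ssh://git@git.internal.example.com/team/project.git"

def add_fallback_and_push(url: str = FALLBACK_URL) -> None:
    # Create the remote if it doesn't exist yet, otherwise just update its URL.
    if subprocess.run(["git", "remote", "add", "fallback", url]).returncode != 0:
        subprocess.run(["git", "remote", "set-url", "fallback", url], check=True)
    # Push all local branches and tags so everyone can pull from the fallback.
    subprocess.run(["git", "push", "--all", "fallback"], check=True)
    subprocess.run(["git", "push", "--tags", "fallback"], check=True)

if __name__ == "__main__":
    add_fallback_and_push()
```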


In my experience it isn't the syncing of the commits that causes trouble. It is all the integrations you layer on top that cause trouble. You've got build systems that trigger whenever a branch is merged. Code review systems and ticketing systems do stuff whenever a commit shows up. When those stop working, all the commits in the world don't matter much because you can't get your code out to the next environment or the next step of your development workflow.

In short, Git allows you to make commits in isolation. That's great if the only work you have can be done in isolation. But in my experience, eventually you'll need to kick off a build, make a deployment, start a code review, or something... and then you are hosed.


While these problems are valid and a shortcoming of distributed version control in general, I believe this is a good illustration of how we should not be using Git in the future. Git can be used as a deployment mechanism for sure, but it's not a very good one compared to other means of publishing and deployment, such as Docker and Kubernetes.

In my opinion, Git is best used as a system of record for changes to a codebase, and while deployment _can_ be triggered from events happening on Git (or a centralized repository such as GitHub), that should definitely not be the only way to deploy something. You should always be able to deploy manually.
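As a sketch of what I mean by a manual deploy that depends only on the local clone, not on GitHub webhooks — the host, build step, and target path here are made-up placeholders:

```python
# Minimal sketch of a manual deploy driven entirely from the local clone.
# Host, build command, and target path are hypothetical.
import subprocess

def deploy(host: str = "deploy.example.com") -> None:
    # Record exactly which commit is being shipped.
    sha = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True).stdout.strip()
    # Build from whatever is checked out locally.
    subprocess.run(["make", "build"], check=True)
    # Copy the artifact to the target host.
    subprocess.run(["rsync", "-az", "build/", f"{host}:/srv/app/{sha}/"], check=True)
    print(f"deployed {sha} to {host}")

if __name__ == "__main__":
    deploy()
```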


GitHub isn't git.

Code reviews, pull requests, and Actions (CI) are all value that GitHub adds on top of git.

Just because the VCS stays up doesn't mean github going down doesn't impact work.


The company I used to work at used GitHub just for git hosting. Code reviews, pull requests, and CI were all handled by other tools. There are probably a lot of others in that situation.


No one does this. There is a huge infrastructure overhead to doing what you are proposing. If I were to guess, not even 1% of companies separate git from its contextual data (CI, PRs, code reviews, etc.).


I'm not proposing this. The last place I worked for did this. There are plenty of third-party tools that do these things with good GitHub integrations.


If they are integrated to GitHub instead of git, they are probably down as well when GitHub is down.


The integrations were mostly there to notify the tools about updates and ensure they have the latest versions of every branch. You need this for things like triggering builds on branch updates, but not for things like uploading a diff to code review.
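For example, a diff for review can be produced straight from the local clone with no GitHub round-trip — a minimal sketch, assuming `main` as the base branch:

```python
# Minimal sketch: export a reviewable patch from the local clone so a review
# tool (or a colleague) can ingest it without GitHub. "main" is an assumption.
import subprocess

def export_patch(base: str = "main", out: str = "review.patch") -> None:
    # Diff of everything on the current branch since it diverged from base.
    diff = subprocess.run(["git", "diff", f"{base}...HEAD"],
                          capture_output=True, text=True, check=True).stdout
    with open(out, "w") as fh:
        fh.write(diff)
    print(f"wrote patch to {out}")

if __name__ == "__main__":
    export_patch()
```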


Can't wait for someone on here to say: "This is why you should run your own gitlab instance".


The best part of running your own in-house gitlab instance is it is almost guaranteed to have substantially less monitoring than GitHub. Meaning nobody in the company will ever be able to prove it has lower availability than GitHub even though it almost certainly does.


The best part of running your own in-house gitlab instance is fiddling too often with the configuration after updates because something broke again. The gitlab ci runners with docker:dind are a constant pain in the ass.


GitLab employee here, specifically on the Runner team! What kind of problems are you facing with `docker:dind`?

We had a recent problem where jobs were hanging due to an update in Docker itself: things halt when running on single-core machines, as explained in https://gitlab.com/gitlab-org/gitlab-runner/issues/6697. It's being fixed upstream in https://github.com/moby/moby/issues/40514. In the meantime, to avoid problems like this, it's better to pin a specific version of Docker, for example `docker:19.03.5-dind`. That way you have control over which dind version you use and can upgrade once you've tested it.

In retrospect, we could have caught this issue earlier and informed our users if we had automated pipelines pulling the latest images and testing them. We started working on that yesterday (https://gitlab.com/groups/gitlab-org/-/epics/2674), and we now monitor dind specifically in https://gitlab.com/gitlab-org/ci-cd/tests/dind-image-tests/


My experience hasn't been anything like this. Countless upgrades, rarely a change to configuration. Any changes that did happen were communicated.

Nor with gitlab-runner (which I even fork to add stuff), and I use docker-in-docker.


On prem software has its place. Whether GitLab or some other alternative.


Nah.

This is why you should run your own gogs/gitea instance.


I've been using gogs from extremely early on and switched to Gitea after the fork. The quality of the software is extremely high and it's quite a breeze to self-host. Highly recommend.


Gitea is amazing. It's exactly what I want and need for a Git server, though I know without CI or 'gists' it's not for everyone. But previously I just had a git server on a barebones Linux server (SSH/cli only). Gitea gave me a decent web front end and easy hooks for mirroring. Overhead for the server isn't THAT much more than bare git. I love gitea!


And Phabricator, the most underrated software in history.


I'd love to use a self-hosted VCS (git or svn) that supports directory-level ACLs and a painless code review workflow. Can this be achieved with Phabricator?


I noodled around with Phabricator for a while and it's slick in many ways but extremely clunky in others. For instance, importing a repo that exists in another VCS is done at creation time in Gitlab as part of the flow - it's not a completely different step.

In Phabricator, you create the repo, you turn off its access URLs, add URLs for the repo you're mirroring, set them as a mirror source, and then turn the access URLs back on. This is about a 5-10 minute process per repo and it's all clicking around in a UI.


Have you looked at rhodecode.com for svn/git?


Really mixed feelings about Phabricator. It's cool to have the concept of a review instead of sending each comment immediately, but the UI still sucks. It's hard to tell which repo the pull request/review you're looking at belongs to, for instance.

And it gets all f*cked up if you start a comment and don't close the input, among many other UI issues (not sure if the UI has improved or if any of it is configurable, btw).


GitHub also has batched reviews by the way.


:) arc is mostly pain but the rest is nice. And the project review stuff is genius.


Or Fossil, much simpler.

Or Pijul, still experimental, but where cherry-picking isn't fundamentally broken and which actually works directly with line patches.


Ah yes, on-prem software: where instead of having an outage and someone has to go fix it, you have an outage and _you_ have to go fix it.


I've never had an outage on my own system that didn't take two seconds to fix because some service just needed restarting.

When you don't have to cater to more than 1 user, there are VASTLY fewer things that can go wrong in a way that takes a long time to fix.


Consider yourself lucky that you've never had a hardware issue.


If you're not prepared to deal with that, use a VPS host. That's what I do.


A curse and a blessing. You can fix it as fast as you want. With GitHub, everyone stops pushing/merging and you have to wait for them to fix problems you would never have encountered yourself.


GitHub/Microsoft’s status as a defense contractor and vendor to NSA/DoD (MS) and ICE (MS and GH) is why you should self-host your repositories, not their uptime levels.

https://www.vice.com/en_us/article/evjwwp/as-githubs-confere...

https://twitter.com/search?q=%23githubdropice

https://twitter.com/githubbers

Several of their staff have resigned over it, and speakers have pulled out of their conference. Their CEO defended the decision to work with the organization that puts young children in concentration camps.

That’s more than enough for me to delete my repos there. Gitea is an excellent and easy-to-deploy replacement that integrates well with Drone and supports u2f.


1) Most people don't care enough about that to even be bothered by it, never mind change their entire tooling and move large amounts of code out of protest.

2) You are the exact same, it seems, if not worse on account of your hypocrisy? Your GitHub address is still listed in your HN bio, you've been active as recently as yesterday, and you're a pro user, so you _literally_ give GitHub money.

If these issues actually matter to you, delete your account. I'm sure someone would love to scoop up the username "sneak". But don't come here on your high horse shouting about ethics while you actively financially support the organization you're lambasting.


My account is comped, and has been since long before the acquisition. I'd forgotten about the link in my bio, as I put it there a long time ago. It's gone now, thank you for reminding me. :)

I am migrating my repositories off of GitHub this week, which is why I now only have about six remaining there instead of the 60+ that I had for many years. The remaining ones are the ones that need to remain online for services that pull from there; I intend to remove my remaining code from the site very soon, on the order of days. I actually happen to be building my new self-hosting server today, having tested out Gitea and found it a perfect replacement.

My account will remain, to squat my username to prevent impersonation, with a single public repo containing only a README explaining the situation and why use of GitHub is inappropriate.


Ha! I was thinking of installing it in my personal k8s cluster for the hell of it.



Honestly it just seems mostly coincidental to me. Reddit has downtime every other day or every few days, and Twitter only slightly less often. The only unusual downtime here is GitHub's, and even then it has been a while since their last outage.


DataDog also had downtime today


I guess it's a bad time to give a presentation proposing that we move from an on-prem SVN repo to GitHub...


That depends: does the on-prem SVN have better uptime than GitHub?


> I guess it's a bad time to give a presentation proposing that we move from an on-prem SVN repo to GitHub...

On-prem Git with mirrors (with Gogs, Gitea, or GitLab as a GUI...).

Developers should also keep local clones, and so should the CI, etc.


Wouldn't like to be in the shoes of GitHub engineers right now. Must be a lot of pressure. I wonder what the outage process looks like in a company of this scale.


Sentry is reporting degraded performance too: https://status.sentry.io/


Does HN traffic spike when github goes down?


I came here to see if I was the only one having issues, so I guess yes...


Same, HN is my go-to source when GitHub is down :P We all grumble about how centralized git hosting is bad, then we all go back to it when GitHub is back up again.


It seems to be up again!

Back to find my answers on GitHub :-)


I consistently get a 500 error when I try to log out; the main page loads fine for me though.


I can't create a PR, maybe they are in read-only mode.


I haven't been able to create a new repository all morning


Seems like push and pull work, at least for me. The web page, indeed, doesn't: can't create a PR, can't review code, comment, merge, etc.


What happened? M$



