Ask HN: Where do you deploy to in 2018 and how?
77 points by Narzerus on Feb 27, 2018 | 83 comments
I've been using Heroku for several years now, and, as eventually happens to everyone, it has become just too expensive.

I'm curious to know what people here use to deploy, and where you're hosting your apps.

Personally I'm looking for an experience as similar as possible to Heroku, any recommendations?




Most of my projects are Rails-based. I tend to use DO [0], and have just discovered Hatchbox [1] for deploys. It's super easy to get side projects started and deployed, and has taken deploying them from days to minutes.

[0] https://www.digitalocean.com/

[1] https://www.hatchbox.io/


Creator of Hatchbox.io here! I was surprised to see this on here, which is awesome. Let me know if you have any questions. Happy to help!


I second DigitalOcean. Great UI, almost perfect API [0], excellent support.

[0] https://digitalocean.uservoice.com/forums/136585-digitalocea...


https://www.nanobox.io is also a great option for Rails... or many other languages & frameworks.

Contained developer environments, simple command line deploys to VPS, and a free tier that can handle most hobbyist application needs.

I deploy Rails and Elixir/Phoenix apps with Nanobox.


I also use Hatchbox. It's a pretty good (and cheaper) alternative to Heroku, and you can own the VPS with your provider of choice!


I do things in an old-school way: two bare-metal servers at different data centers, with totaluptime.com acting as a load balancer with auto-failover.

Deployment is done via SFTP by pressing the publish button in Visual Studio, which deploys to the inactive server. I then manually trigger tests on GhostInspector (this could be automated via its API) to make sure I didn't break anything. Then I run a custom script to make the load balancer redirect traffic to the upgraded server.

Solo founder, small bootstrapped business generating $50k/month with 1,000 paying customers. Hosting costs are under $500. I could double the number of clients without needing to upgrade the hardware. I looked into moving to AWS or Azure, but can't justify paying 4x more for the same performance.


Thanks for sharing, I'm considering bare metal as well for a project. Is latency between the two data centers an issue for you, e.g. is one of your two servers running a SQL database as master?


Right, I replicate the database. Latency hasn't been a problem for our volume. During our peak hours we get 30 requests/second, so it's pretty manageable.


As someone who likes a down-to-earth approach, I find it strangely refreshing to read about doing things the old-school way. It would be interesting for anybody who builds starting from small blocks: do you have a blog about this ongoing work?


This is probably a very dumb question, but what exactly do you mean when you say you run bare metal servers at a data center?

I run a kind of similar setup sans Visual Studio, so I'm very interested in understanding your setup a little better.


Bare metal means I'm not running on VMs, but on dedicated servers, like the ones you can find on OVH and many others.


Kubernetes, both in-house and on AWS. I couldn't live without it anymore, it's just so nice to use and easy once you've gotten over the initial learning curve.

kubectl apply -f for simple deployments, Helm for more complex ones.
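For reference, the two paths mentioned might look like this minimal sketch (the app name, chart path, and namespace are placeholders, and the command variables are overridable so the functions can be dry-run):

```shell
#!/usr/bin/env sh
# Sketch of the two deploy paths: plain manifests vs Helm releases.
set -eu
KUBECTL="${KUBECTL:-kubectl}"
HELM="${HELM:-helm}"

simple_deploy() {   # simple case: apply a manifest file directly
    "$KUBECTL" apply -f deployment.yaml
}

chart_deploy() {    # complex case: Helm chart, installed or upgraded in place
    "$HELM" upgrade --install myapp ./charts/myapp --namespace production
}
```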


What services on AWS do you use with Kubernetes? Any good resources re: Kube and AWS?


Google Cloud Platform - their PaaS and serverless solutions are extremely cheap. Also, the 1TB of free queries on BigQuery is a true hidden gem in the cloud. Comparable instances also perform better than on other cloud vendors; we always needed fewer workers on GCP than on AWS, for example.


I also prefer GCP over AWS. GCP has better documentation, better UI and UX, a better console, and is just friendlier overall.


Easy offboarding from Heroku onto dokku [1] and DigitalOcean [2].

But these days I run plain Docker on DigitalOcean and Azure, with deployments managed by my CI.

[1] - http://dokku.viewdocs.io/dokku/

[2] - https://www.digitalocean.com/products/one-click-apps/dokku/


I'm surprised how many people here are snowflaking their own web application servers instead of just using Heroku. Heroku is $25/dyno/month. At $100/hr for a dev, a Heroku dyno costs 15 minutes of dev time per month. If you spend just one day a month tinkering with your application-server VMs or updating the Docker images running your Rails app — you could have bought 32 Heroku dynos instead.
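The back-of-the-envelope math here checks out, sketched as shell arithmetic:

```shell
# Cost comparison from the comment above.
dev_rate=100   # $/hr for a developer
dyno=25        # $/dyno/month on Heroku

echo "$(( dyno * 60 / dev_rate )) minutes of dev time buys one dyno-month"  # 15
echo "$(( 8 * dev_rate / dyno )) dynos for one 8-hour day of tinkering"     # 32
```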


Well, you usually need at least two to three dynos plus a database, which takes the price to $100–$125/mo. That's still a good savings over hiring a sysadmin, though.

But some services provide a good subset of heroku functionality on your own servers for a flat fee which is where I think the sweet spot is now.


Azure App Service, deployed with MSBuild (which we fire off from TeamCity: it clones the repo, runs the build scripts, then deploys with MSBuild). App Services have a 'staging' deployment slot, so you can deploy to the staging slot, test it, then swap the slots and you're live.

App Services are cheap and easy to manage, and if you write efficient code they have plenty of horsepower for medium-sized websites.
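The slot swap described above can also be scripted with the Azure CLI; a minimal sketch (resource group and app names are placeholders, and `AZ` is overridable for dry runs):

```shell
#!/usr/bin/env sh
# Promote a tested staging slot to production with no cold start.
set -eu
AZ="${AZ:-az}"

swap_slots() {
    group="$1"; app="$2"
    # the build lands in the 'staging' slot first; the swap makes it live
    "$AZ" webapp deployment slot swap \
        --resource-group "$group" --name "$app" \
        --slot staging --target-slot production
}
# usage: swap_slots my-resource-group my-webapp
```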


What parts of Heroku in particular are you interested in seeing in an alternative? If a friendly user interface is a big part of the ask, your options are unfortunately limited (at least among the big cloud providers, I'm unfamiliar with smaller providers).

AWS's Elastic Beanstalk doesn't have much UX to speak of (just a few options to fiddle with and some weak logging support). It's a very raw service, much like the rest of AWS (rock-solid infrastructure, bring your own everything else). They take care of infrastructure, but developer pleasantries are entirely up to you.

App Engine is significantly better on the UX front: you get error reporting, metrics, logging, etc. all in a single cohesive web app. In my experience I hit some hard-to-debug quirks, but that is very much a YMMV situation. It can be tricky to figure out how to configure the right pieces and permissions in their new App Engine variant (Docker-based). I wouldn't use the classic variant at this point; it's pretty heavy on the vendor lock-in front. It's great if you need the specific capabilities of classic App Engine, but that's most likely not what you need.

If you're using Heroku's hosted Postgres, know that GCloud's Postgres support is still in beta. AWS RDS, on the other hand, has very good Postgres support.

In both cases, deploys aren't just a git push but go through a custom CLI (`eb deploy` and `gcloud something something`). Both are also typically tough to get to a first successful deploy with. For example, Elastic Beanstalk will spend quite a while attempting to recover from deployment errors, and if you've never deployed a successful version it's very bad at that; it also blocks deployments while it attempts to recover. So you end up stuck for a while as it tries to recover from a problem it will never recover from (this has been a problem with literally every Beanstalk service I've ever deployed, heh).
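For reference, those CLI deploys boil down to something like this sketch (environment and file names are placeholders; `eb` comes from awsebcli, and assuming App Engine the gcloud side would be `gcloud app deploy`):

```shell
#!/usr/bin/env sh
# The two CLI-driven deploys, with commands overridable for dry runs.
set -eu
EB="${EB:-eb}"
GCLOUD="${GCLOUD:-gcloud}"

deploy_beanstalk() { "$EB" deploy my-env; }                    # awsebcli
deploy_appengine() { "$GCLOUD" app deploy app.yaml --quiet; }  # Google Cloud SDK
```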

You can also go the hosted Kubernetes route. Currently GCloud is king here, but naturally that's a command-line-only UX unless you deploy your own Kubernetes UI service (I'm unfamiliar with the options there).

Unfamiliar with Azure's offerings.


AWS, mostly. On my servers I have simple shell scripts with functions for pulling and running Docker images. Locally I use Rake to build and push the images; finally, Rake executes the deploy script on the server via SSH, which pulls and runs the new images.

It's not fancy like dokku (or even Docker Compose), but it's composed of very minor pieces that are easy to debug and extend.

Not knowing whether Docker Compose had executed successfully, or whether I was on the newest image, grew into an extra item on my checklist when debugging my applications.
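A pull-and-run deploy script along those lines might look like this sketch (the image, container name, and port are all placeholders, not the poster's actual setup; `DOCKER` is overridable for dry runs):

```shell
#!/usr/bin/env sh
# Minimal pull-and-run deploy: fetch the newest image, replace the container.
set -eu

DOCKER="${DOCKER:-docker}"
IMAGE="${IMAGE:-registry.example.com/myapp:latest}"
NAME="${NAME:-myapp}"
PORT="${PORT:-8080}"

deploy() {
    "$DOCKER" pull "$IMAGE"                      # fetch the newest image
    "$DOCKER" rm -f "$NAME" 2>/dev/null || true  # drop the old container, if any
    "$DOCKER" run -d --name "$NAME" \
        -p "$PORT:$PORT" --restart unless-stopped "$IMAGE"
}
# rake would invoke this over ssh, e.g.: ssh user@host 'sh deploy.sh'
```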


I was in the exact same position as you, looking at paying over $100/mo for a simple side project on Heroku. I wound up deploying it to my own VPS for $5-10/mo instead, and it actually ran faster and was much more reliable.

I am working on turning this into an app that others can use too. It's still a ways away from real production use, but I am close to a closed beta. If you're interested, check it out:

https://www.seamless.cloud/


Flynn [1] is worth a look (having come from dokku). Heroku-like and runs as a cluster.

[1]: https://flynn.io


Are they still knocking about? They used to do a weekly blog post, but there's been nothing since March 2017.


I know this post is young, but I'm surprised not to see Linode here yet. We deploy to Linode and have recently switched from Puppet to SaltStack to manage our servers. There's a bit of Fabric and Capistrano to glue it all together.


How does Linode compare to DigitalOcean? At least price-wise, DigitalOcean looks cheaper.


Take a look at https://nanobox.io/. I'm using it for side projects and it's great and cheap. (I'm not affiliated with them.)


Also, the dev experience it gives is awesome.


I use the Hetzner cloud [1], which launched recently. It's pretty fast and not too expensive.

[1]: https://www.hetzner.de/cloud


We've been using a mix of Heroku and Firebase for static hosting.

Lately we've been moving small services to cloud functions in order to shut down Heroku dynos and it's been great so far.


If you like the Heroku user experience you might like the Serverless Framework[1] as a front-end to AWS Lambda. In my case the app that I was deploying was already a WSGI Twelve-Factor app[2] so I created a yaml file and deployment just worked.

[1] https://serverless.com/ [2] https://12factor.net/
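Once the yaml file is in place, the deploy itself is a single Framework command; a sketch (the stage name is a placeholder, and `SLS` is overridable for dry runs):

```shell
#!/usr/bin/env sh
# Serverless Framework deploy: packages the app and pushes it to AWS Lambda.
set -eu
SLS="${SLS:-serverless}"

deploy() { "$SLS" deploy --stage production; }
# usage: deploy   (after `npm install -g serverless` and writing serverless.yml)
```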


I continue to use Heroku for a few applications. Since the last few applications I've worked on are B2B apps, they don't get much usage in off-hours. I wrote a Heroku add-on [1] for scaling down on nights and weekends, so the cost isn't bad overall.

[1] https://elements.heroku.com/addons/flightformation


At present, we use a network of bare-metal servers, which are managed by Chef, with a custom deployment system which deploys build artefacts from CI as defined in Chef. It's tidy, smart and quick – but it does require a fair bit of infrastructure.

We're in the middle of piloting a switch to Nomad (https://www.hashicorp.com/products/nomad), replacing much of the custom deployment system with it, though still running on bare-metal servers. It's an absolutely fabulous bit of software that really hits the use-cases for an organisation of our size, so I'm excited to see how that works out.

For side-projects where things like HA aren't much of a concern, I've settled on some minimal shell scripts to deploy tarballs over SSH. It's actually pretty neat – CI builds a project, and runs a small script to copy the results to a server and restart the services. It's always worth considering the simple solutions if you don't need the more advanced features!
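Such a tarball-over-SSH deploy might be sketched like this (the paths, host, and systemd unit name are all placeholders, not the poster's actual setup; the command variables are overridable for dry runs):

```shell
#!/usr/bin/env sh
# Build a tarball from CI output, copy it over, unpack, and restart.
set -eu
TAR="${TAR:-tar}"; SCP="${SCP:-scp}"; SSH="${SSH:-ssh}"

release_name() {  # timestamped artefact name, e.g. myapp-20180227120000.tar.gz
    printf '%s-%s.tar.gz' "$1" "$(date -u +%Y%m%d%H%M%S)"
}

deploy() {
    app="$1"; host="$2"
    tarball="$(release_name "$app")"
    "$TAR" -czf "$tarball" -C build .              # CI output assumed in ./build
    "$SCP" "$tarball" "$host:/srv/$app/releases/"
    "$SSH" "$host" "tar -xzf /srv/$app/releases/$tarball -C /srv/$app/current && sudo systemctl restart $app"
}
# usage: deploy myapp deploy@app1.example.com
```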


I know it's not useful for you, but: In our own datacenters, using Kubernetes on baremetal.


I'm looking into this. How do you get k8s onto bare metal, and how difficult is it? Could you point me towards tooling and best practices for running k8s on bare metal?


The short answer is: It's pretty ugly. I don't know the details, but we PXE-boot and install CoreOS, then run kubelet using rkt (using a systemd service). The other k8s components (etcd, apiserver, controller manager etc.) are managed by the kubelet using static manifests. Persistent volumes are backed by a separate storage appliance via NFS.

Our team built this entire process ~2 years ago, when we started using k8s. We would probably use some off-the-shelf parts today, but practically nothing existed back then.


I can recommend APPUiO [0] which is a Swiss based container platform running OpenShift [1] and providing a full PaaS experience. The crew is reachable under [2], answering any questions in the chat. Disclaimer: I work for VSHN, the company behind APPUiO.

[0] https://appuio.ch/en/public.html [1] https://www.openshift.org/ [2] https://community.appuio.ch/


In making a selection like this you need to separate the deploy UX (git push heroku master) from the underlying service.

Things like dokku, etc are great at providing the UX and generally really solid.

However, much of the benefit of Heroku is that they handle some/much of the underlying sysadmin tasks you'd otherwise need to worry about. It's easy to discount that as it's mostly invisible until there's a serious problem.

A middle ground between dokku on a VPS and full Heroku is perhaps Amazon's Elastic Beanstalk service combined with Amazon RDS, which provides a big chunk of the functionality (albeit in a less slick wrapper).


I used to think Heroku was too expensive, so I set out to build a comparable service using more open-source pieces to bring costs down, including bringing the friendly Heroku interface to your own servers: whether you had 1 server or 30, you'd pay a single small fee. I even became a maintainer of Dokku along the way, getting very familiar with its plugins and core and contributing to both while building my service.

Then I realized Heroku really isn't that expensive anymore, and I dropped it all and started building my projects on Heroku again.


Heroku and AWS, depending on application. Heroku is a bit pricey, yes, but on the other hand they've provided really good service over the years.

In general all my deploys are done using Travis CI.


Work: AWS.

Personal projects: Github pages (mostly react apps, e.g. [1, 2]), GAE (python backends, e.g. [1]), Google Spreadsheet (data backend, e.g. [2]), Firebase (e.g., [3]).

[1] http://priceeth.github.io/

[2] http://hasgluten.com/

[3] http://distrosheet.com/


I use DO Droplets ($5–$10) to host multiple static websites with server blocks, or multiple Node.js applications with Dokku. So far it has worked out very well for me. If you are interested in Dokku, you can find a guide to my setup here [0].

- [0] https://www.robinwieruch.de/deploy-applications-digital-ocea...


Mesos running on AWS, almost entirely with EC2 spot instances.

We were big Heroku users as well, so carried over a number of Heroku patterns. For example, configuring everything with ENV vars, lightweight load balancing, grouping apps into several deployable targets, etc.

It’s certainly not better than Heroku in most ways, though there was no plausible way for us to continue running our workload on Heroku.

(Edited: typo)


As a sole developer, I have the luxury of keeping it simple: git push, and if the tests pass, it FTPs to www.pythonanywhere.com.


I just listened to the web platform podcast [0] episode about WeDeploy [1]. Haven't tried it yet, but it sounds really nice.

[0]: https://thewebplatformpodcast.com/155-wedeploy

[1]: https://wedeploy.com/


If you like the git-push flow of Heroku, I'd recommend checking out Hasura - https://hasura.io - it's a BaaS + PaaS for containers. Everything about your project is declarative, and your apps are dockerized and deployed onto a Kubernetes cluster with free SSL.


Heh, nice timing with this question! We just published an article on how we deploy at Kiwi.com (to Rancher) yesterday: https://code.kiwi.com/announcing-crane-e8ce911b187b


Ugh, this level of abstraction is dizzying. Now you're telling me I need Crane to deploy to Rancher to manage Kubernetes to orchestrate Docker containers to run my app?


Well, almost. We don't use Kubernetes, but Cattle (which is part of Rancher.)

I don't entirely understand your point though; is our level of abstraction too low? With Heroku (preference of the OP) all of this, the containerization, the orchestration, the deploy tool, is abstracted away. Is that what you'd prefer?


I've been rolling my own server on Scaleway/Hetzner and deploying with Exoframe [1] for the past year or so. Works pretty well :)

[1] https://github.com/exoframejs/exoframe


Previously AWS; now DigitalOcean. Happy customer, administering https://eddtor.com (free) and the http://memoria.email front page.


Our API Infrastructure (~40 servers) is spread across Linode, DigitalOcean, and Vultr. We use home-rolled scripts to build machines from scratch (aggressively tear down and rebuild to avoid maintenance windows), and use Ansible to deploy any code updates.


A personal Heroku/dokku clone, which is really just a remote git repo with some hooks that build images and provision to Docker.

haproxy in front, load balancing and routing.

A tinc mesh for a ghetto private cloud of dirt-cheap boxes.

A Consul K/V store for runtime configuration.
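Runtime configuration via the Consul K/V store might look like this sketch (the key layout is an assumption, not the poster's actual scheme; `CONSUL` is overridable for dry runs):

```shell
#!/usr/bin/env sh
# Read/write per-app runtime configuration from Consul's K/V store.
set -eu
CONSUL="${CONSUL:-consul}"

get_config() { "$CONSUL" kv get "apps/$1/$2"; }       # e.g. apps/myapp/DATABASE_URL
set_config() { "$CONSUL" kv put "apps/$1/$2" "$3"; }
# usage: set_config myapp DATABASE_URL "postgres://..."
```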


Using vultr.com for a few side projects. I'm planning to release an iOS app with a Node.js and PostgreSQL backend. I find it somewhat cheaper than DO, but can't say anything about the reliability yet.


I have a simple bash script that uses git, rsync, and the build tool for my stack (sbt). It builds and deploys my project to any Linux server on the planet with SSH access. Simple, fast, effective, and painless.
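A script along those lines might look like this sketch (the host, artefact path, and service name are guesses, and sbt-assembly is assumed for the fat-jar build; the command variables are overridable for dry runs):

```shell
#!/usr/bin/env sh
# Build with sbt, sync the jar over, restart the service.
set -eu
SBT="${SBT:-sbt}"; RSYNC="${RSYNC:-rsync}"; SSH="${SSH:-ssh}"

deploy() {
    host="$1"
    "$SBT" assembly                                     # build the fat jar
    "$RSYNC" -avz target/scala-2.12/app-assembly.jar "$host:/opt/app/app.jar"
    "$SSH" "$host" 'sudo systemctl restart app'         # pick up the new jar
}
# usage: deploy deploy@server.example.com
```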


Some AMIs built with Packer, auto scaling groups, ELBs, CloudFront, Jenkins, RDS, Elasticache, and some scripts in the repo. I build simple apps; if they got crazier I might use a cloud managed k8s.


Try `now` from https://zeit.co; it's super pleasant and hacker-friendly, though mostly Node.js-centered (AFAIK).


It supports Dockerfiles, so you can deploy any backend technology in Docker containers, not only Node.js.
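Deploying is then a single command run against the project directory; a sketch (`NOW` is overridable for dry runs, and the path argument defaults to the current directory):

```shell
#!/usr/bin/env sh
# `now` deploys a directory; with a Dockerfile present it builds that container.
set -eu
NOW="${NOW:-now}"

deploy() { "$NOW" "${1:-.}"; }
# usage: deploy ./my-project
```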


Dokku, running on an HP DM1 laptop in my cupboard. My ISP gives me a static IP and a 100/100 Mbit connection, so it's quite good for my personal projects and some work tooling!


We switched MindMup from Heroku to AWS Lambda (+S3+Cloudfront for web page hosting), gradually in 2016, and I’m quite happy with the results. We deploy using claudia.js.


We use Ansible for everything, deploying into AWS. It's really great and I found it to be far easier to wrap my head around than anything else I looked at.


Azure App Service, deployed via Visual Studio Team Services. App Service supports Git push deployments, but VSTS plays nicely with it (as you'd hope).


Using AWS CodePipeline (Github -> AWS CodeBuild -> AWS CodeDeploy). It works nicely with all of our AWS resources. Looking to migrate to Kubernetes.


Docker managed by Kubernetes and deployed in AWS.


Rancher is actually pretty solid. The API is good too; it has let me write automation scripts for everyday tasks.


Dedicated servers colocated in town, via shell scripts, libvirt, and an in-house continuous integration service.


Mesosphere DC/OS, AWS, ECS, EKS, EC2, CloudFormation, Jenkins, and a home-grown deployment API...


AWS, because we have to at work. Heroku for personal projects, because it's easy and fuss-free :-)


AWS Lambda with aws-sam-local for serverless apps, and AWS S3 plus CloudFront for the HTML/CSS/JS frontend.

For heavier stuff I use CloudFormation, and I deploy Docker images to ECS.

The AWS stuff has been a steep learning curve over the last few months, but worth it.


If you're using Elixir, take a look at gigalixir.com. I'm the founder.


+1 for Gigalixir! Jesse is super helpful too.


Gitlab for CI/CD then deploy to AWS, Google Cloud, or Firebase hosting.


What do folks use as a deployment strategy for machine-learning models?


Manually deploying using scripts to RHEL virtual machines.


Netlify is fantastic for deploying static sites.


We're using Jenkins to deploy our monolithic Java e-commerce applications onto bare-metal servers; now slowly moving to AWS.


surge.sh is stupid-simple for static sites.
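For reference, a surge deploy is essentially one command: `surge <directory> <domain>` (the directory and domain below are placeholders; `SURGE` is overridable for dry runs):

```shell
#!/usr/bin/env sh
# Publish a static build directory to a surge.sh domain.
set -eu
SURGE="${SURGE:-surge}"

deploy() { "$SURGE" "${1:-./dist}" "${2:-my-site.surge.sh}"; }
# usage: deploy ./public myproject.surge.sh   (after `npm install -g surge`)
```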


I love surge, it's been awesome.


On my localhost. :-D

For a quick demo, on a DMZ raspberry pi.


Google app engine


DigitalOcean


Have you tried dokku?


VMware guests with PowerShell and Jenkins.



