A nice follow-up to this would be NATing external access from all dynos (web and worker) for an application through a single unique IP per app. That way external resources (e.g. your app's database) can be whitelisted by IP. One of the reasons the major Postgres bug last month was so bad for DBaaS providers (like Heroku Postgres) was that most leave inbound access totally open. That's great when you're first getting started, since you don't need to explicitly configure firewall settings, but it's terrible for production.
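For the database side, a sketch of what that whitelisting could look like on a self-managed Postgres (the NAT IP 203.0.113.10 and the `appdb`/`app` names are hypothetical placeholders):

```
# pg_hba.conf -- accept the app's NAT IP over SSL, reject everyone else.
# TYPE     DATABASE   USER   ADDRESS            METHOD
hostssl    appdb      app    203.0.113.10/32    md5
host       all        all    0.0.0.0/0          reject
```

The point being: with a single stable outbound IP per app, the ADDRESS column collapses to one /32 instead of "open to the world".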
I'm not sure how economical it is to provide this for the free tier, though I think a paid option would be viable. AWS charges $3.60/IP/month, so it's not that expensive, and I'm sure there are plenty of folks who would pay $10, $20, or even $50/month for a unique outbound address.
Funnily enough this happened with Heroku and Facebook just a couple of days ago - a bunch of apps got blocked from using the Graph API until Facebook removed the banned AWS IPs.
Wow, that is pretty steep pricing. The lowest tier that seems usable is the $75/mo one. Then again, if you're locked into Heroku then even $1,250/mo isn't that much for peace of mind and added security. I'm sure whoever is paying for it is happy it's available.
This was one of the reasons we handled server deployments ourselves for our startup. The cloud version of our app runs in production on AWS in a VPC so all outbound traffic is NATed through a single public IP. With reserved instances it costs about $22/mo for the NAT gateway and setup was pretty straightforward.
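For anyone curious, the classic NAT instance setup really is only a couple of commands. This is a sketch, assuming a 10.0.0.0/16 VPC CIDR and eth0 as the instance's public-facing interface (yours may differ):

```shell
# On the NAT instance: allow the kernel to forward packets,
# and masquerade anything coming from the private subnets
# so it leaves with the NAT instance's public IP.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE

# Also required, via the EC2 console or API: disable the
# source/destination check on the NAT instance, and point the
# private subnets' route tables' default route (0.0.0.0/0) at it.
```

With that in place, every instance in the private subnets shares the NAT instance's Elastic IP for outbound traffic, which is the single address you then whitelist.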
> all outbound traffic is NATed through a single public IP. With reserved instances it costs about $22/mo for the NAT gateway and setup was pretty straightforward.
I've considered doing this myself so I'm curious: are you getting good network performance through the NAT gateway? What instance type are you using?
No issues to speak of so far. Our app databases are in the VPC itself, so our own traffic does not get routed through the NAT gateway, but I haven't heard of any issues from users either.
At the moment we're using an m1.small for the NAT itself, though if you have perf issues you can bump that up as well. I would guess a c1.medium would be more appropriate, though as I said we haven't had any issues, so we haven't considered changing anything yet.
Here's a speed test from an m1.small through the NAT (27.5 MB/s):

    $ curl -o /dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  500M  100  500M    0     0  27.5M      0  0:00:18  0:00:18 --:--:-- 29.2M
Here's a speed test from an m1.small vanilla EC2 instance outside the NAT (38.2 MB/s):

    $ curl -o /dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  500M  100  500M    0     0  38.2M      0  0:00:13  0:00:13 --:--:-- 34.1M
This was also totally random and gave us some serious headaches: one dyno at a time starts failing against the Graph API. A restart helps, but only for some time, until Heroku randomly cycles your instances again. We would be happy to pay a little extra for a reserved IP.
> A dyno should let you run any application or set of processes that you would run on your local machine or on an old-school server.
As soon as you allow multiple processes per dyno the abstraction becomes less clear. It means that now a dyno is more like a small VPS and I have to know which process is on which dyno for communication.
You used to be able to peek at how many other dynos you were sharing an instance with by running
netstat -l | grep lxc | wc -l
I sampled the resulting number a bunch of times and usually got 100 ± 25. I'm guessing this isn't possible anymore – not that it was useful, just interesting.