New Dyno Networking Model (heroku.com)
79 points by friism on May 2, 2013 | 18 comments



A nice follow-up to this would be NATing external access from all dynos (web and worker) for an application through a single unique IP per app. That way external resources (e.g. your app's database) can be whitelisted by IP. One of the reasons the major Postgres bug last month was so bad for DBaaS providers (like Heroku Postgres) was that most leave inbound access totally open. That's great when you're first getting started, as you don't need to explicitly configure firewall settings, but it's terrible for production.
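With a single known outbound IP you could then lock inbound database access down to just your app instead of leaving it open to the world. As a rough sketch, assuming Postgres and a hypothetical NAT IP, the pg_hba.conf entry would look something like:

    # only accept remote connections from the app's outbound IP (203.0.113.10 is a placeholder)
    # TYPE  DATABASE  USER  ADDRESS           METHOD
    host    all       all   203.0.113.10/32   md5
Anything that doesn't match a line gets rejected, so a leaked password alone is no longer enough to reach the database from an arbitrary host.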

I'm not sure how economical it is to provide this for the free tier, though I think a paid option would be viable. AWS charges $3.60/IP/month, so it's not that expensive, and I'm sure there are plenty of folks who would pay $10, $20, or even $50/month for a unique outbound address.


Funnily enough, this happened with Heroku and Facebook just a couple of days ago: a bunch of apps got blocked from using the Graph API until Facebook removed the affected AWS IPs from its ban list.

https://addons.heroku.com/proximo does what's needed, but at a serious price for high volume.


Wow, that is pretty steep pricing. The lowest tier that seems usable is the $75/mo one. Then again, if you're locked into Heroku, even $1,250/mo isn't that much for peace of mind and added security. I'm sure whoever is paying for it is happy it's available.

This was one of the reasons we handled server deployments ourselves for our startup. The cloud version of our app runs in production on AWS in a VPC so all outbound traffic is NATed through a single public IP. With reserved instances it costs about $22/mo for the NAT gateway and setup was pretty straightforward.
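In case it's useful to anyone, here's roughly what the routing side looks like with the AWS CLI (the resource IDs are placeholders, and you also have to disable the source/dest check on the NAT instance):

    # send all internet-bound traffic from the private subnet through the NAT instance
    aws ec2 create-route --route-table-id rtb-12345678 --destination-cidr-block 0.0.0.0/0 --instance-id i-12345678
    # the NAT instance has to forward packets that aren't addressed to itself
    aws ec2 modify-instance-attribute --instance-id i-12345678 --no-source-dest-check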


> all outbound traffic is NATed through a single public IP. With reserved instances it costs about $22/mo for the NAT gateway and setup was pretty straightforward.

I've considered doing this myself so I'm curious: are you getting good network performance through the NAT gateway? What instance type are you using?


No issues to speak of so far. Our app databases are in the VPC itself, so our own traffic does not get routed through the NAT gateway, but I haven't heard of any issues from users either.

At the moment we're using an m1.small for the NAT itself, though if you have perf issues you can bump that up as well. I would guess a c1.medium would be more appropriate, though as I said we haven't had any issues, so we haven't considered changing anything yet.
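In case it helps anyone setting this up by hand, the NAT box itself only needs a couple of lines. Roughly something like this on the NAT instance (assuming a 10.0.0.0/16 VPC CIDR, so adjust for yours):

    # let the instance forward packets it receives from the private subnet
    sysctl -w net.ipv4.ip_forward=1
    # rewrite outbound traffic from the private subnet to the NAT instance's own address
    iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE
I believe Amazon's stock VPC NAT AMI does essentially this for you out of the box.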

Here's a speed test from an m1.small through the NAT (27.5 MB/s):

    $ curl -o /dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  500M  100  500M    0     0  27.5M      0  0:00:18  0:00:18 --:--:-- 29.2M

Here's a speed test from an m1.small vanilla EC2 instance outside the NAT (38.2 MB/s):

    $ curl -o /dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  500M  100  500M    0     0  38.2M      0  0:00:13  0:00:13 --:--:-- 34.1M


This is great info - thanks! :-)


I can't believe people don't use a VPC whenever they have more than one instance in AWS.


It's actually now the default for new accounts.

http://aws.typepad.com/aws/2013/03/amazon-ec2-update-virtual...


This was also totally random and gave us a serious headache: one dyno at a time starts failing with the Graph API. A restart helps, but only for some time, until Heroku randomly cycles your instances. Would be happy to pay a little extra for a reserved IP.


Heroku engineer here. Thanks for the suggestion; we will definitely consider it in the future.

Meanwhile, there is an add-on that does exactly this: https://addons.heroku.com/proximo


I'm a bit puzzled why people would use the local IP address for inter-process communication on a single host rather than unix domain sockets.


Not all services support unix domain sockets. It's one more option on the table, but I agree that domain sockets should be favored in most cases.
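For services that do support both, it's usually a one-line change. With Postgres, for example (the socket path is the Debian/Ubuntu default and mydb is a placeholder):

    # over loopback TCP
    psql -h 127.0.0.1 -p 5432 mydb
    # over the unix domain socket (a -h value starting with / is treated as a socket directory)
    psql -h /var/run/postgresql mydb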


This seems to run contrary to the idea of managed processes. Wasn't Heroku supposed to abstract away the idea of a host altogether?


It is still abstracted; nothing changes from the point of view of most applications.

It just opens more possibilities and increases the isolation.


From the article:

> A dyno should let you run any application or set of processes that you would run on your local machine or on an old-school server.

As soon as you allow multiple processes per dyno, the abstraction becomes less clear. It means that a dyno is now more like a small VPS, and I have to know which process is on which dyno for communication.


You don't have to run multiple processes if you don't want to. You shouldn't have to change the way you do anything if you're happy with your app now.


You used to be able to peek at how many other dynos you were sharing an instance with by running

    netstat -l | grep lxc | wc -l
I sampled the resulting number a bunch of times and usually got 100 ± 25. I'm guessing this isn't possible anymore – not that it was useful, just interesting.


Back in the day you could just do "ps -x" and see every process running on the machine :)



