Hacker News
Git Push Heroku Master: Now 40% Faster (heroku.com)
180 points by JoshGlazebrook on Feb 11, 2014 | 44 comments



I'm really glad to see Heroku focusing on these types of optimizations. We've been using similar optimization strategies internally at my Rails consultancy for over a year [1], and it's amazing how much fun it is to ship features when you can add a small feature, check your test suite via CircleCI, and be live quickly thereafter.

I remember the pain of deploying Huckberry [2] on Rails 3 a couple of years ago. Each deploy took ~6 minutes, and it'd drive us crazy. A whole lot of "Compiling!"-type [3] moments. (Back then, the Asset Pipeline compiled all of your assets twice: once with the real filename, once with the digest-filename: 'filename--f74c093df554be59d45d3c87920eba1f.js'. As you can imagine, this was quite slow.)
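The digest in those filenames comes from hashing the compiled file's contents. A minimal sketch of that style of fingerprinting (assuming the MD5-of-contents scheme Sprockets used at the time; `digest_path` is a hypothetical helper for illustration, not the real API):

```ruby
require "digest/md5"

# Insert an MD5 digest of the file's contents before the extension,
# e.g. "application.js" -> "application-<32 hex chars>.js".
def digest_path(logical_path, contents)
  digest = Digest::MD5.hexdigest(contents)
  logical_path.sub(/\.(\w+)\z/) { "-#{digest}.#{$1}" }
end

puts digest_path("application.js", "alert('hi');")
```

Because the digest changes whenever the contents change, the fingerprinted URL can be cached forever; compiling each asset under two names meant doing all that work twice.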

This is one of the reasons I really love Heroku as a development platform. We're a 4-person shop—we can't spin off a resource to spend 2 weeks speeding up our deployment process. But with Heroku, we wake up one morning, and our processes are 40% faster. It's fantastic.

[1] https://github.com/heroku/heroku-buildpack-ruby/pull/96

[2] https://secure.huckberry.com/

[3] https://xkcd.com/303/

[edit: formatting]


That's great!

My biggest problem right now in terms of deployment time is syncing assets with Amazon. It literally takes 5 minutes for a small website...


Yeah, that's frustrating. I might suggest you just serve them from your app [1], and use CloudFront with a custom origin to take over from there. I can deploy a Rails 4 app in ~20 seconds.

[1] http://guides.rubyonrails.org/configuring.html#rails-general...
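For reference, a minimal sketch of the relevant production settings for that setup (assuming Rails 4, with CloudFront configured to use the app as its custom origin; the distribution hostname is made up):

```ruby
# config/environments/production.rb (sketch, not a complete config)

# Let the Rails app serve its own compiled assets, so CloudFront
# can fetch them from the app on a cache miss.
config.serve_static_assets = true

# Far-future cache headers so CloudFront (and browsers) keep copies.
config.static_cache_control = "public, max-age=31536000"

# Rewrite asset URLs in your pages to point at the distribution.
config.action_controller.asset_host = "dxxxxxxxxxxxx.cloudfront.net"
```

With this in place there is no sync step at all: `rake assets:precompile` runs during the deploy, and the CDN pulls each fingerprinted file the first time it's requested.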


At Heroku we now [recommend using AWS CloudFront][1] to serve assets, backed by your app running on Heroku instead of with S3. It's a much simpler setup with less state and fewer moving parts - and builds are faster :-).

[1]: https://devcenter.heroku.com/articles/using-amazon-cloudfron...


Thanks, really happy to see this is the officially recommended approach.

I've been doing it, but got some uncomfortable knee-jerk reactions from other developers, e.g. "But you should NEVER serve static assets directly!!!".

...but it's only served once!


> e.g. "But you should NEVER serve static assets directly!!!".

As with music, painting, and architecture, any development-related NEVERS require that we understand the why, so we can know when to break the rule.


Isn't this still problematic with SSL? Last time I checked, you had to pay $600/mo for a custom CloudFront SSL cert if you wanted to pass through the credentials from your own domain.

(Using mixed http/https was not an option.)


It's true that this is a problem if you insist on serving your CDN assets from a custom domain CNAME-mapped to CloudFront (i.e. https://assets.mydomain.com/). It doesn't cost extra if you just use the default CloudFront distribution URL, i.e. https://d3vam04na8c92l.cloudfront.net/stylesheets/applicatio...

This is not entirely clear from the Heroku docs, I'll ping the maintainer for an update.


One cool trick for serving mixed http/https that I somehow went 10 years without finding out: you can just point your Rails asset_host to "//d3vam04na8c92l.cloudfront.net". Browsers understand this to mean "Look up this address via whatever protocol I am currently on, http if http, https if over SSL."

This is a huge cache boost in Rails because you don't have to cache your pages twice (once for HTTP, once for HTTPS).
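A sketch of that setting (the leading `//` is the protocol-relative part; the distribution hostname is the example one from upthread):

```ruby
# config/environments/production.rb (sketch)
# A scheme-less asset_host inherits the page's protocol, so one
# cached page body works for both HTTP and HTTPS visitors.
config.action_controller.asset_host = "//d3vam04na8c92l.cloudfront.net"
```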


Oh god.. While your post is cool, I reacted violently to the "One cool trick..." part at the beginning.

Upworthy, you've broken me.


Is there any real downside to this approach, or is it just that it looks "less professional"?


No, you're still serving the same assets. It's only really an issue if your users like seeing where network requests are going and spot the cloudfront URL.


Does Google have something like CloudFront for Compute Engine? Google gives you edge-cache for free if you use App Engine, but what about GCE?


I have been using the asset_sync gem (with Ruby on Rails), together with S3 & CloudFront, for 2 years and it works wonderfully: https://github.com/rumblelabs/asset_sync

I basically never think about deploying assets. It nearly always just works.


It works, but they are a bit slow on merging pull requests. I'm currently using a fork so that CORS can work on Rackspace. I think this sort of issue is why Heroku no longer recommends this approach.


It's not recommended because asset precompilation usually requires access to the environment (e.g. a DB connection), and thus isn't deterministic.


Don't sync. Compile on deploy and let CloudFront fetch when needed. It should also be possible to cache assets in memcache instead of compiling on each deploy (personally I ran into some issues with it, but can't recall why).


Something isn't right. Sounds like you're recalculating the cache digests each time.

Make sure your cache store is set to dalli (in production.rb):

      config.assets.cache_store = :dalli_store


That seems like a lot of time for a small website. Are you re-uploading the assets all the time?


Have you tried a dedicated CDN service?


That's great! Now, if only Heroku were 40% less expensive... that would be progress.


A little time spent learning how to use a server and build an actual scaling infrastructure, and Heroku turns into dust for dumb dorks. Heroku is the MSFT of servers, lock yourself in, and prepare to feel the bind as it gets tighter.

Seriously, people: DIY, it isn't hard. Let the "180 websites in 180 days" [1] girl use Heroku, she's giddy to get anything working. Real hackers should not be satisfied with lock-in, ridiculous prices that correlate with the ineptitude of the user base, and potential downtime on top of AWS's normal downtime (Heroku runs on AWS behind the scenes). If you're serious about your business, Heroku makes no sense at all, unless you're a non-technical cofounder and it makes you feel 'safe.'

[1] http://jenniferdewalt.com/


It seems you're trying to insult someone who's making an effort to learn some new skills. Not sure what this has to do with the topic at hand.

As to the point of DIY not being hard, getting something working may not be hard. But handling fault tolerance, logging, backup, downtime, etc. is non-trivial. And as mentioned in other comments, this takes time. Even if you're a l33t ub3r h4x0r, that will always be true. For some teams, the DIY investment will make sense, others would rather spend their time building features than dealing with ops.


This conversation isn't geared toward insulting Jennifer, it's an attack on the lameness of using Heroku. I think Heroku users should each get a sticker that says 'I'm a proud black box user'


If you're serious about your business, maybe you want to spend time building and running the business instead of playing sysadmin. Especially when you're a small team. Let's say you pay Heroku $500 a month, which is already a somewhat mature business. On the other hand, you could pay a very cheap sysadmin $50/hr. So you'd get 10 hours of her time. You'd get way more value out of the $500 for Heroku. My time, as a founder, is worth way more than $100/hr; I'm not going to get my hands dirty in being a sysadmin.


You act like Heroku gives you a personal sysadmin. At best, they keep your servers updated at a laggy pace to maintain major stability rather than security.

Heroku is not a replacement for a sysadmin. In fact, you get less with Heroku because you can't even access the systems running your code.


This might be an aside, but given all the complaints about how expensive Heroku is (despite the free tier and how they handle everything below the app, it's the gold standard in PaaS): has anyone tried competitive services like Cloud Foundry [1], OpenShift [2], or Cloud66 [3]? Cloud Foundry and OpenShift are either hosted with similar free tiers or can be set up yourself, and Cloud66 is more for the DIY crowd, being free for development environments. I do appreciate Heroku's free tier for toy projects, but if you had to scale on a budget (and could handle the setup), I don't know whether one of those would be a better starting point.

[1] http://www.gopivotal.com/products/cloud-foundry

[2] http://openshift.com

[3] http://cloud66.com


I wanted to give Heroku a try for a new website my brother and I are building, and looking at the costs, it's quite steep.

$35 for a measly 2-dyno application, and another $50 for a production PostgreSQL database.

Our use case is REALLY simple, we aren't building some freak sideshow redis golang nonblocking nodejs mongodb ravendb nqueue sync rabbit hyperthreaded monster. We're just building a Rails 4 site, with a PostgreSQL database backend. That's ENOUGH for our use cases. For what we need it's infinitely cheaper to just rent a Linode/DigitalOcean server, use something like Chef to spin up the services we need and voila. We probably won't have to touch the servers for a very long time. Again: We're doing simple things - CRUD basically.

I love Heroku for its simplicity, but it's just too darn expensive for us.


I agree it's not cheap.

What is your alternative that will give you as useful a deploy environment (for whatever kinds of 'useful' you need, everyone has different needs)? How much will it cost, in direct costs or your time? Do you or someone on your team have the expertise and time to provide and maintain such a deploy environment?

If there's something that is good enough but cheaper, then by all means use it.

The reason Heroku is so successful despite not being cheap is that for many people, there isn't one. Which means while it may not be cheap, for many people it's a pretty darn good value.


> For what we need it's infinitely cheaper to just rent a Linode/DigitalOcean server, use something like Chef to spin up the services we need and voila. We probably won't have to touch the servers for a very long time. Again: We're doing simple things - CRUD basically.

To be fair, all of that takes time (and therefore, $$). For side projects it's probably not a big deal--the learning process itself is valuable--but if you're actually talking hard sums, Heroku isn't always as expensive as it seems.

You'll need to manage:

- Chef cookbooks and testing (this is a big one)

- Deployment (using Chef standalone? Fabric? Git server + post-receive hooks?)

- Backups (are you pushing WAL logs to S3? Have you tested recovery?)

- System resourcing (didn't write a logrotate config for that custom service? Out of disk space?)

- Monitoring (Pingdom, New Relic, etc.) in case it does go down

- etc., etc.

It all adds up. For my own side project I went down the DO + Salt + Packer path, but my day job isn't as a dev/programmer so I had to spend a little time learning the idiosyncrasies of Salt and Upstart. These skills are valuable (for the next project down the line), but if I was part of a small team with a deadline then taking "detours" isn't always an option.


Hey Sergio, my agency's website [1] has been happily hosted on Heroku for a year and I pay approximately $0 for it. You have to set up your site for performance—caching, CDN, Unicorn, and the like—but unless you have something like 30 concurrent users, I think you might be overthinking it. You definitely don't need a production-level PostgreSQL database.

That said, I'm pushing my developers to use Middleman [2] for mostly-static sites these days. No moving parts means nothing can go wrong, and we host on GitHub pages for free. Not sure if this fits your use case.

I'm a bit of a Rails performance nut, feel free to ping me at nj@thirdprestige.com if you'd like to ask me more specific questions.

[1] http://www.thirdprestige.com/

[2] http://middlemanapp.com/


Yeah, that number is pretty abysmal. We average around 100+ users per second per Google Analytics. That means we might need about 4 dynos for our simple website? Yikes!


If it's really "simple" (not dynamic, or doesn't change often), you should throw the whole site behind a CDN. Then you can support tens of thousands of users very inexpensively.


1X dev dynos are free. If you need to go up to 2 dynos or scale vertically to a 2X dyno then you start paying. You shouldn't really need to use 2 dynos until you actually have to launch. You're going to be spending a lot of time configuring/writing Chef cookbooks and refining your deployment process. This is time that you could be spending developing features and making your application more reliable. That's where the savings come in. It's not more expensive; it's less.


It just depends upon what you value doing with your time, whether you value paying more for convenience, or paying less but having to deal with everything. It just boils down to a design choice.


Well, no, I disagree. Having to create, maintain, and monitor your own infrastructure is a very big time cost, with very real consequences when you are in a business. As a business you should be spending as much time as possible focusing on your product, not on things that are extraneous to it. At this point the basic infrastructure for 99% of small web applications is pretty much identical: a web service, a backend relational store, a load balancer, and probably a caching service. This basic setup can accommodate a wide range of features, and there is absolutely no need to spend time configuring these services and making sure they remain up and running.

I see a lot of people who want to configure their own infrastructure because for some reason they believe their application is special. Invariably, when I look at their application more closely, I realize that, no, it tends not to be that special. Perhaps when/if they scale up, the service requirements will need to become more specialized—but even then, probably not. Even if it does end up that you have to change your application's structure during scaling, trying to make it ready for scaling too early is a premature optimization and will end up costing you.


> We probably won't have to touch the servers for a very long time.

Until something fucks up. If you are on Heroku they will handle DB, routing, and server problems. On Linode you have to deal with that yourself or hire a sysadmin.


I have thought the same thing. I also have a few sites that get little traffic 99% of the time, for which the free plan works fine. I'd pay a fee so the app doesn't always have to spin up on the free plan.


How is this relevant?


I meant to reply to nthj's comment.


The sad thing is, deploying code should be this easy and fast whether or not you are using Heroku.


This is so awesome. I only have 2 servers and the repo isn't very big, but sometimes it takes close to 10 seconds to deploy. A 45% decrease for Python is incredible.


10 seconds? Buddy... you've got nothing to complain about.


Just did a node.js deploy and it definitely felt faster.



