Hacker News

You would, but it's slightly misleading to say that two dollars buy you an hour on a supercomputer. It buys you an hour on a tiny fraction of a supercomputer. Once you reach proper cluster size, you'll pay significantly more; maybe even more than on a "normal" machine of similar power.



> maybe even more than on a "normal" machine of similar power.

To be fair, it has the added benefit that you don't have to sell a cluster of PS3s on eBay after your number crunching is done... I think that could be the biggest change cloud computing brings to HPC.


When I price out my services (which usually come out cheaper than EC2), a rough rule of thumb is that the monthly fee should be around 1/4th the capital cost of the hardware.

When thinking about the rent vs. buy equation, this is something to keep in mind. If you only need the hardware for a month or less, you will almost certainly save money by renting. If you keep it for a year, you will nearly always save money by buying. (Of course, it's a little more complex: if you have a bunch of hardware knowledge in-house and can sell hardware that was used for a month at nearly full price, it might make sense to buy even for one month's usage. If the opposite is true, e.g. something in your organization prevents you from hiring hardware contractors, or you have budget for MRC but not for capital costs, renting might make sense for longer. I just use the 'four months' as a starting place.)
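The four-month starting point falls straight out of the 1/4th rule of thumb above. A minimal sketch of that break-even arithmetic, with made-up example prices (the function name and all dollar figures are illustrative, not from the comment):

```python
def breakeven_months(capital_cost: float, monthly_rent: float) -> float:
    """Months of renting after which buying becomes cheaper, ignoring
    resale value, hosting, power, and staffing costs."""
    return capital_cost / monthly_rent

# If the monthly fee is ~1/4th the capital cost of the hardware,
# break-even lands at about four months:
print(breakeven_months(4000.0, 1000.0))  # -> 4.0
```

Resale value and in-house hardware skills shift the break-even point earlier; organizational barriers to owning hardware push it later, which is exactly the caveat in the comment above.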

This is leaving aside the "of reasonable size" bit. If you need 256MiB of RAM and a tiny slice of an x86 CPU in a data center, it's nearly always cheaper to rent a virtual machine than to own physical hardware. But for large clusters, you usually need enough RAM/CPU to justify buying and hosting a real (32GiB RAM / 8 core or better) server.


It's a bit more complicated than that: your computing needs may vary with time. One month you may need to do a huge amount of processing on a thousand-node monster while the next you would be perfectly happy with two Nvidia GPUs for number crunching and visualization. In order to justify the purchase of a given computing capacity, you need to make sure you will have use for it.




