The number of machines you need to run a service is not really a linear function of your traffic. If you have a mostly static website that can be heavily cached/CDN'd, you can easily scale to thousands of requests a second with a small server footprint. I expect that's true of many of the top 100 sites as measured by visitors (the way Quantcast ranks them).
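
To make that concrete, here's a rough back-of-the-envelope sketch in Python. The hit ratio and per-server capacity are made-up numbers, but they show why origin footprint barely grows once a CDN absorbs most requests:

    import math

    # Back-of-the-envelope sketch with assumed numbers: a CDN that absorbs
    # most requests keeps the origin footprint nearly flat as traffic grows.
    def origin_servers_needed(total_rps, cache_hit_ratio, rps_per_server):
        # Only cache misses reach the origin.
        origin_rps = total_rps * (1 - cache_hit_ratio)
        return max(1, math.ceil(origin_rps / rps_per_server))

    # 5,000 req/s overall, 98% served from the CDN edge, an origin box
    # that can handle 500 req/s of dynamic/miss traffic:
    print(origin_servers_needed(5_000, 0.98, 500))  # -> 1
    # The same traffic with no cache in front:
    print(origin_servers_needed(5_000, 0.0, 500))   # -> 10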

But if you need to store a lot of data, or need to look up data with very low latency, or do CPU-intensive work for every request, you will end up with a lot more servers. (The other thing to consider is that SaaS companies can easily end up dealing with more traffic than even the largest websites, because they tend to aggregate traffic from many websites; Quantcast, for example, where I used to work, got hundreds of thousands of requests per second to its measurement endpoint.)
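
The CPU-bound case works out very differently. A rough sketch, again with assumed numbers (5 ms of CPU per request, 32-core machines kept at ~60% utilization for headroom), shows why an endpoint taking hundreds of thousands of requests per second needs a real fleet:

    import math

    # Rough sizing sketch with assumed numbers: CPU-bound work scales the
    # fleet roughly linearly with traffic, unlike cached static content.
    def servers_for_cpu_bound(rps, cpu_ms_per_request, cores_per_server,
                              target_utilization=0.6):
        cores_busy = rps * (cpu_ms_per_request / 1000.0)  # CPU-seconds per second
        usable_cores_per_server = cores_per_server * target_utilization
        return math.ceil(cores_busy / usable_cores_per_server)

    # 100,000 req/s to an aggregated, SaaS-style endpoint:
    print(servers_for_cpu_bound(100_000, 5, 32))  # -> 27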

Note: the site I mentioned did hit the database quite a few times for each page. It was a nice challenge.