
It is a large site by traffic measure, but I would guess the traffic is heavily read-only. Managing workloads with more data mutation introduces different complexities, which means you can't just cache everything and accept TTL-based staleness for reads; writes force you into proper cache invalidation (see the sketch below).

edit: To be clear, not saying SO isn't an achievement, but it's one type of use case that yields a really simple tech stack.
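
To illustrate what I mean by read-heavy: a minimal sketch of read-through caching with a fixed TTL (Python; fetch_from_db and the names here are hypothetical stand-ins, not SO's actual stack):

    import time

    _cache = {}          # question_id -> (value, expires_at)
    TTL_SECONDS = 60     # staleness window the site tolerates for reads

    def get_question(question_id, fetch_from_db):
        """Read-through cache: serve possibly-stale data until the TTL expires."""
        now = time.time()
        hit = _cache.get(question_id)
        if hit is not None and hit[1] > now:
            return hit[0]                      # cache hit, possibly stale
        value = fetch_from_db(question_id)     # only misses reach the database
        _cache[question_id] = (value, now + TTL_SECONDS)
        return value

With heavier writes you can't lean on the TTL alone; you have to invalidate or update entries when the underlying rows change, which is where the extra complexity comes in.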




Their stats are here:

https://stackexchange.com/performance

Their DB handles a peak of 11,000 qps at only 15% CPU usage. That's after caching. There are also some ElasticSearch servers. Sure, their traffic is heavily read-only, but it's also a site that exists purely for user-generated content. They could probably handle far higher write loads than they do, and they handle a lot of traffic as-is.

What specific complexities would be introduced by an even higher write load that AWS specifically would help them address?



