
at the time of accusation, WhatsApp was running on 16 servers! Erlang is ideal for communication, they had like 1 million concurrent users on each server!!!
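For anyone wondering why a single box can even hold that many connections: chat traffic is mostly idle sockets exchanging tiny messages, so the server's main job is parking connections cheaply and relaying a few bytes now and then. Here's a minimal sketch of that one-lightweight-task-per-connection shape, in Python asyncio purely for illustration (the real thing was Erlang/ejabberd, and the port number and names below are made up):

    import asyncio

    clients = set()

    async def handle(reader, writer):
        # One cheap coroutine per connection; an idle socket costs almost nothing here.
        clients.add(writer)
        try:
            while True:
                line = await reader.readline()   # parked on I/O most of the time
                if not line:                     # EOF: client went away
                    break
                for other in list(clients):      # fan the tiny message out
                    if other is not writer:
                        other.write(line)
                        await other.drain()
        finally:
            clients.discard(writer)
            writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 5222)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

The point is the per-connection overhead, which Erlang's lightweight processes drive even lower than coroutines do.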



It feels like modern web design practices have grossly skewed people’s estimates of what it costs to deliver a service. Text messages are not fundamentally hardware intensive to deliver - unless you add tracking, “machine learning”, ad delivery, and mountains of JS for serving the app itself dynamically.


Or write the service in Ruby


I adore Ruby, but I think that's fair.

Twitter's failwhale-heavy Ruby experience is a good example. Ruby and especially Rails are good examples of trading hardware for programmer convenience. This is absolutely great when you're banging out an in-house app for 100 people, and there's no problem spending $1 per user-month on hardware. But Twitter's revenue is only $0.60 per user-month, and they need to spend on things besides servers. There is a reason that they needed to switch away from Ruby to a much more complicated architecture: https://blog.twitter.com/engineering/en_us/topics/infrastruc...
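Quick arithmetic on those figures, treating both numbers as rough:

    # Per-user economics from the numbers above (illustrative only):
    in_house_hw_per_user_month = 1.00       # the "no problem" spend for a 100-person internal app
    print(100 * in_house_hw_per_user_month) # $100/month total, nobody notices

    twitter_revenue_per_user_month = 0.60
    print(in_house_hw_per_user_month / twitter_revenue_per_user_month)
    # ~1.67: that level of hardware spend alone would exceed Twitter's entire revenue per user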

I also saw some people go the other direction. They had a Java app for serving a high-volume website. But the developers had a rewrite itch, and the execs were afraid they couldn't get acquired without a more hip technology stack. They even hired a fancy consulting firm to help, but when the first version was ready to go it was incredibly slow. Like two orders of magnitude slower to render a page. The rendering times were considered normal in Rails-land, but were a real problem at volume. So they spent another 6 weeks putting in a lot of caching while the ops people ordered a bunch more hardware.


Absolutely stellar talk by principal engineer Rick Reed about scaling their Erlang setup: https://www.youtube.com/watch?v=c12cYAUTXXs.


We use Erlang for managing our device messaging, and I've seen the board in the office crawl up to simply obscene numbers of connected devices... on one server. Multiple millions.

To be fair, they're tiny little messages; I'm talking maybe a meg a day per device, even though they're communicating constantly.
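Rough math on why that works, with the device count assumed at 3 million just to pick a number:

    # Back-of-envelope; only the ~1 MB/day figure comes from above, the rest is assumed.
    devices = 3_000_000
    bytes_per_device_per_day = 1_000_000
    per_device_bytes_per_sec = bytes_per_device_per_day / 86_400
    aggregate_mbytes_per_sec = devices * per_device_bytes_per_sec / 1_000_000
    print(round(per_device_bytes_per_sec, 1))    # ~11.6 bytes/sec per device
    print(round(aggregate_mbytes_per_sec, 1))    # ~34.7 MB/sec for the whole box

So the bandwidth is trivial; the real cost is holding millions of sockets open, which is exactly what Erlang is good at.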


There's a talk by one of their engineers[1] (which someone else posted here, so I'm watching it now) about their architecture, published in March 2014. In it he talks about ~550 servers, including 250 multimedia servers and 150 chat servers.

The only place he mentions 16 is when he talks about the "multimedia database". I think there were 16 sharded database servers.

1: https://www.youtube.com/watch?v=c12cYAUTXXs


I am personally very impressed by this, and I suspect it is one of the less-mentioned reasons they were acquired by Facebook.

I remember reading this when it came out https://blog.whatsapp.com/196/1-million-is-so-2011


BTW they were running ejabberd.


I agree that what they did with server utilization was impressive. However, the blast radius was pretty huge if something went down. It also wouldn't scale to images/video very well.


16 servers for 10^9 users doing VoIP and uploading vids? How's that possible?


It didn't have VoIP when it was acquired, and back then sending media was way less common.


You can offload a lot of the work to client devices themselves.
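One concrete flavor of that is making the client shrink and recompress media before it ever hits the network, so the servers mostly just relay bytes. A hypothetical sketch with Pillow, not a claim about how WhatsApp actually does it:

    from PIL import Image  # pip install Pillow

    def prepare_for_upload(src_path, out_path, max_px=1600, quality=70):
        # Runs on the client: downscale and recompress locally,
        # so the server never has to touch full-resolution originals.
        img = Image.open(src_path)
        img.thumbnail((max_px, max_px))          # in-place, keeps aspect ratio
        img.convert("RGB").save(out_path, "JPEG", quality=quality)
        return out_path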


what were they accused of?


I think he meant acquisition.



