Hacker News

That's what I suspected.

So, when your data volume exceeds what will fit in RAM, I'm guessing that your plan is to shard to multiple MongoDB servers. Is your plan to continue to add multiple replicas to each shard to handle DR?




What is really important is keeping your indexes in RAM; our data already greatly exceeds the amount of RAM we have available. Even our indexes are only partially in memory, and performance is still terrific.


2.0 has a new index format that should reduce your index sizes by ~20-30%, letting you fit more in RAM. If you haven't looked at upgrading yet, it is probably worth testing with 2.0.1 to see how it performs for your use case. You will need to run reIndex() on each collection, or restore from a dump, to take advantage of the new index format.
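For reference, the two upgrade paths mentioned above look roughly like this. This is a sketch, not an exact procedure: the database name `mydb` and collection name `mycoll` are placeholders, and you should test against a non-production copy first.

```shell
# Path 1: rebuild indexes in place, per collection, from the mongo shell
# (blocks the collection while it runs):
#   > db.mycoll.reIndex()

# Path 2: dump with the old binaries, upgrade mongod to 2.0.1, then restore.
# mongorestore rebuilds all indexes, so they come back in the 2.0 format:
mongodump --db mydb --out /backup/mydb-dump
# ...stop mongod, install 2.0.1, start mongod, then:
mongorestore --db mydb /backup/mydb-dump
```

Path 2 costs more downtime but also compacts the data files, which can be a nice side effect if the collections have seen heavy deletes.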


Yes, we are aware, and we can't wait to upgrade. We will probably do it at our next scheduled downtime :)



