
I'll start the ball rolling with an easy one...

How did it go?

Can you talk through some of the changes you had to make, both from a library perspective, as well as any architectural changes that were required?




Overall, it went well. It was bumpy at first because we had to learn about Mongo's performance characteristics; having a full-size database to work with made it a bit easier to identify friction points early on.

To make the most of it, we bit the bullet and rewrote the whole application. You can't join results from two collections in MongoDB, so we had to denormalize quite a lot.
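Roughly speaking, denormalizing for Mongo means embedding the related rows inside the parent document so a single read replaces the join. A minimal sketch with pymongo (the collection and field names here are just illustrative, not our actual schema):

    # Hypothetical example: embedding line items and customer fields
    # inside each order, instead of joining separate relational tables.
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["shop"]

    db.orders.insert_one({
        "_id": 1001,
        "customer": {"id": 42, "name": "Alice"},   # copied from the users table
        "items": [                                  # former line_items rows
            {"sku": "A-1", "qty": 2, "price": 9.99},
            {"sku": "B-7", "qty": 1, "price": 24.50},
        ],
        "total": 44.48,
    })

    # One round trip, no join:
    order = db.orders.find_one({"_id": 1001})

The trade-off is that the copied customer fields have to be kept in sync by the application whenever the source data changes.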


Can you elaborate a bit more on the scope of the rewrite? I'm assuming you didn't have much front-end work to redo, mainly middle- to back-end code?

Did you have anything you could re-use, such as your high-level business logic classes, or (and this is meant non-judgmentally) was your code too deeply wrapped around the existing data store, or the idea of the data store as an RDBMS, to make any of it reusable?

Was your logic primarily in stored procedures, or (and again I'm assuming here, this time that yours is a .NET environment since you called it 'SQL Server') did you make heavy use of LINQ to handle those kinds of things?

Can you talk about those performance issues, and how you handled them? (For my third assumption, I'll go with indexes.)

Looking forward to reading the in-depth article. Thanks for fielding these questions.


We practically rewrote the whole application. Some code was spared, but I would say that less than 20% of the original source was reused. This was not just because of the migration to MongoDB but more because we decided to take the product in a different direction.

Fundamentally, while we could have migrated without a full rewrite, SQL Server was only one of the technologies we wanted to replace. One thing led to another, and eventually we decided to re-launch the product instead of iterating on it.

As for the performance issues, most of them stemmed from MongoDB's current locking strategies. I heard 2.0 handles it a bit better, but we haven't rolled it out to production yet. Planning your indexes carefully so they fit in RAM is very, very important to ensure high throughput with MongoDB.
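For anyone curious, a quick way to sanity-check that is to compare the reported index sizes against the memory you can dedicate to Mongo. A rough pymongo sketch (collection and field names are made up):

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["shop"]

    # Only index what you actually query on; every index competes for RAM.
    db.orders.create_index([("customer.id", 1), ("created_at", -1)])

    # collStats reports index sizes in bytes; the totals across your hot
    # collections should fit comfortably in memory.
    stats = db.command("collstats", "orders")
    print("total index size (MB):", stats["totalIndexSize"] / 1024 / 1024)
    print("per-index sizes (bytes):", stats["indexSizes"])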


I just had one last question:

Did you evaluate any other "NoSQL" DBs, such as Riak? If so, what was your main impetus for choosing Mongo? Did you go with mongod or mongos for your environment?


Yes, we evaluated several other alternatives. Ultimately, we felt that MongoDB was at the sweet spot of best fit (for our needs) and maturity.

We tried to run Mongo on Windows and that was a bit of a disaster, so we are running it on Linux.


Just to clarify: MongoDB's main target platform is Linux; the Windows version is clearly a second-class citizen at this time. Not only does the Windows version perform poorly under I/O pressure, it also crashes and leaves the database in a corrupted state (again, this only happens under significant I/O pressure, but it does happen).


I've only ever run it in a Linux environment; I have not heard great things about the Windows version.

Are you guys using mongos (for running several nodes) or just a standalone mongod?

If mongos, can you talk a little about your experience setting that up? If not, can you speak to why you chose to run on a single node?
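(For reference, by mongos I mean the sharded topology, where the application talks to a router in front of several shards rather than to a single mongod. Roughly the following, with made-up host names and an illustrative shard key:)

    from pymongo import MongoClient

    # Standalone: the driver points straight at one mongod.
    standalone = MongoClient("mongodb://db1.example.com:27017")

    # Sharded: the driver points at a mongos router, and someone has to
    # register the shards and choose a shard key for each collection.
    router = MongoClient("mongodb://mongos1.example.com:27017")
    router.admin.command("addShard", "shard0.example.com:27018")
    router.admin.command("addShard", "shard1.example.com:27018")
    router.admin.command("enableSharding", "shop")
    router.admin.command("shardCollection", "shop.orders",
                         key={"customer.id": 1})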


Sorry, I forgot to answer that: we are running mongod :)



