Yes, the white paper isn't much better. From the white paper:
"95% reduction in operational overhead
• MongoDB Ops Manager reduces tasks such as
deployment, scaling, upgrades and backups to just a
few clicks or an API call. Continuous, point-in-time
backups and real-time alerting on over 100 system
metrics help ensure always-on availability. Ops Manager
is available as part of MongoDB Enterprise Advanced.
• Greater control over MongoDB’s logging granularity
coupled with the addition of severity messages to each
log message makes it possible to yield deeper visibility
into the database for diagnostics and debugging,
without overwhelming DBAs or systems with
extraneous log data."
Seems like
1. They have added tools that make some common ops/DB tasks a little less painful.
2. Better logging for debugging. I.e. is this a feature for ops teams, or something to help MongoDB support?
The Ops Manager tool does a lot of tasks for you automatically now: upgrading, taking backups, recovery, and deploying new servers into a new or existing cluster. These were the types of tasks that were a pain to do manually, and even with config management/scripts they weren't always easy.
I don't know where they get the 95% number from, but the Ops Manager tool does make operating MongoDB many times easier.
The way they advertise this makes me think the old version is a horrible, slow, piece of junk.
After having spent the better part of two decades working with RDBMSes, I tried MongoDB on a couple of projects and found it has a place in the world, but a fairly narrow use case where it actually simplifies things rather than making them more difficult.
As always I'll wait for an update or two and let someone else try the shiny new features before I try it out.
>I tried MongoDB on a couple of projects and found it has a place in the world, but a fairly narrow use case where it actually simplifies things rather than making them more difficult.
This was exactly my experience. I see the usefulness of a flexible schema, however only in the right situations.
Haven't they been making that claim since the first release? I remember their entire marketing strategy being convincing people that MongoDB solved any and all DB use-cases better than all existing tech. Which is why so many people who jumped on the bandwagon found themselves switching back off of it once the honeymoon was over.
This is very interesting, but I don't want to be the first to migrate a production 2.x DB to 3.0. Can someone else do this and write an article on how hard it was and if it actually ended up working? :)
To change the storage engine to WiredTiger, you will need to manually export and re-import the data using mongodump and mongorestore. But you can do it for the members of a replica set one at a time. http://docs.mongodb.org/master/release-notes/3.0-upgrade/
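Roughly, the dump/restore dance from the upgrade docs looks like this (paths and ports below are placeholders, and you'd stop the old 2.x mongod before starting the new one; obviously needs a live server, so treat it as a sketch):

```shell
# 1. Dump everything from the existing (MMAPv1-backed) mongod
mongodump --out /backup/mongodb-2.x-dump

# 2. Stop the old mongod, then start a 3.0 binary with WiredTiger
#    against a fresh, empty data directory
mongod --storageEngine wiredTiger --dbpath /data/wiredtiger \
       --fork --logpath /var/log/mongod.log

# 3. Restore the dump into the WiredTiger instance
mongorestore /backup/mongodb-2.x-dump
```

For a replica set you'd do this one secondary at a time, let it resync, then step down and convert the primary.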
Says who? Why does DBRef exist? I would like to see any non-trivial application that doesn't use ids to reference between documents. Almost everyone is doing joins, just in the application layer, which is the worst place to do them.
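For what it's worth, by "joins in the application layer" I mean something like this (a toy sketch with plain dicts standing in for collections; in real code each lookup would be a find_one() round-trip to the database):

```python
# Toy stand-ins for two MongoDB collections: orders reference users by id.
users = {
    1: {"_id": 1, "name": "alice"},
    2: {"_id": 2, "name": "bob"},
}
orders = [
    {"_id": 10, "user_id": 1, "total": 25.0},
    {"_id": 11, "user_id": 2, "total": 7.5},
    {"_id": 12, "user_id": 1, "total": 3.0},
]

def join_orders_with_users(orders, users):
    """Application-layer 'join': for each order, look up the referenced user."""
    joined = []
    for order in orders:
        # One lookup per order -- the classic N+1 pattern a SQL join avoids.
        user = users.get(order["user_id"])
        joined.append({**order, "user": user})
    return joined

result = join_orders_with_users(orders, users)
```

The point is that the data model still has references; the join logic has just been pushed out of the database and into every application that reads the data.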
WiredTiger shows a lot of potential, but it would be irresponsible to make such a radically different engine the default for everyone, even for new databases, without giving it some time to mature.
I'd be interested in benchmarks comparing this and TokuMX - from what I understand, TokuMX is supposed to have much better read performance than MongoDB 2.x, and this new release doesn't mention anything about improving that.
TokuMX is also supposed to use much less disk space, which is something this new release is meant to improve. It'd be interesting to know whether they've caught up.
I once heard a TV commercial claim that their conditioner made your hair 95% more manageable. I guess it's a bit like that.