I do really enjoy using Postgres for a hybrid approach, using a jsonb field to store a bunch of data about an object. It works very well, and query speeds are great since you can index fields.
We use PostgreSQL with regular columns and JSONB columns as well, and it works wonderfully. High speed, high flexibility, and all the transactional guarantees Postgres gives you.
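For reference, the hybrid approach looks roughly like this (table and field names are made up for illustration):

```sql
-- Relational columns for the structured core, JSONB for the flexible part
CREATE TABLE products (
    id         bigserial PRIMARY KEY,
    name       text NOT NULL,
    attributes jsonb NOT NULL DEFAULT '{}'
);

-- A GIN index makes containment queries on the JSONB fast
CREATE INDEX products_attributes_idx ON products USING gin (attributes);

-- Find every product whose attributes contain {"color": "red"}
SELECT id, name FROM products WHERE attributes @> '{"color": "red"}';

-- An expression index can target one frequently-queried field
CREATE INDEX products_brand_idx ON products ((attributes ->> 'brand'));
```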
> By default, multi-document transactions wait 5 milliseconds to acquire locks required by the operations in the transaction. If the transaction cannot acquire its required locks within the 5 milliseconds, the transaction aborts.
Automatic cancellation rather than actual deadlock detection is going to be one hell of a footgun.
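If I remember correctly, the 5 ms default is tunable via a server parameter (`maxTransactionLockRequestTimeoutMillis`); the value below is just an example, not a recommendation:

```javascript
// mongo shell config fragment: raise the transaction lock acquisition timeout
db.adminCommand({
  setParameter: 1,
  maxTransactionLockRequestTimeoutMillis: 100  // default is 5
})
```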
I'd argue this is a double-barreled footgun, as most usage of MongoDB is from garbage-collected languages. One wrongly timed GC pause and your transaction is dead.
> I'd argue this is a double-barreled footgun, as most usage of MongoDB is from garbage-collected languages. One wrongly timed GC pause and your transaction is dead.
That’s not how it works. The transaction as a whole is sent to Mongo for execution server side; the client isn’t manually controlling transaction execution.
Granted, it's only writing to some collections, but I assumed you can read from the session during a transaction.
> The transaction as a whole is sent to Mongo for execution server side; the client isn’t manually controlling transaction execution.
Are you saying that transactions are serialized as a series of pure updates and sent to the server as such? i.e. you can't read a value, use it for some logic, update some other values, repeat, then commit? If that's the case, this would be better labeled "multi-document atomic updates", as (to me) a transaction implies interaction with the data in app code.
> In the example on the docs page it looks like the logic is happening in the app code
Not quite. The code in the docs you linked to handles what happens when a transaction does not complete server-side -- typically, you want to retry the entire transaction a few times in case transient locks have been released or preconditions are met. It does not suggest that the transaction is being controlled/orchestrated by the client.
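That retry pattern is just a loop around the whole transaction. A minimal sketch in Python with the driver stubbed out (`FakeSession`, `run_transaction`, and `TransientError` are stand-ins for illustration, not real driver APIs):

```python
class TransientError(Exception):
    """Stand-in for a driver error labeled TransientTransactionError."""

class FakeSession:
    """Stub that fails twice with a transient error, then succeeds."""
    def __init__(self):
        self.attempts = 0

    def run_transaction(self):
        self.attempts += 1
        if self.attempts < 3:
            raise TransientError("write conflict, try again")
        return "committed"

def with_retry(session, max_attempts=5):
    # Retry the *entire* transaction on transient errors;
    # any other error bubbles up immediately.
    for attempt in range(1, max_attempts + 1):
        try:
            return session.run_transaction()
        except TransientError:
            if attempt == max_attempts:
                raise

session = FakeSession()
result = with_retry(session)
print(result, session.attempts)  # -> committed 3
```

The point is that the unit of retry is the whole transaction, not any individual statement inside it.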
> Are you saying that transactions are serialized as a series of pure updates and sent to the server as such?
Yes, generally. Check out examples of Postgres transactions -- they are plain-text "queries" that are executed with all-or-nothing semantics.
> (to me) transaction implies interaction with the data in app code.
Transactions, generally, are groups of statements/queries that are either all applied or none at all. They do not imply interaction with the data in app code, unless the app code itself is executed as part of the transaction itself (e.g., UDFs or stored procedures). They are like mini-programs that are shipped to the DB to be executed in a concurrency controlled and undo-able environment.
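A classic example of such an all-or-nothing "mini-program" in Postgres (hypothetical accounts table):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- If anything above fails, ROLLBACK undoes everything; otherwise:
COMMIT;
```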
Generally a footgun is something that's designed in such a way as to be extremely likely to be used in a way that's going to cause problems for yourself.
Yes, it's a play on "shooting yourself in the foot"; the idea is that you built a gun specifically designed to shoot yourself in the foot.
It's talking about shooting yourself in the foot, but doesn't blame the victim. For me, it's one of the better cultural changes in software development in the last five years.
It doesn't get more web-scale than `/dev/null`. Unlimited throughput, zero latency, full availability and consistency even in the presence of partitions, and constant network usage for any number of nodes!
There's a chance of data loss, of course, but the write performance is so good that it's a perfect fit for workloads like analytics or logging.
MongoDB represents to me a very good way to build a product. There has always been so much derisive criticism about MongoDB opting to prioritize convenience of customer workflows above all else, and to go back and add best practices, basic data safety, etc., on a piecemeal basis after the fact. Customers are surprisingly willing to put up with problems as long as usability and user experience is high, and they will wait for other features. Meanwhile, plenty of other database projects may start out with a more deliberate focus on classical database safety and guarantees, yet hardly build any customer base.
Even though I may like e.g. Postgres features more, there is still something to be respected about how MongoDB has operated, and the constant vitriol about their chosen priorities has always sounded hollow to me, even accounting for stories about data loss, etc.
Incidentally, I once had the chance to tour the MongoDB office near Times Square, and boy, I can tell you it is not an office environment for me. Extremely loud, and they even have things like scooter parking slots and signs for “scooter etiquette” for rolling around the office on a scooter.
I’m not sure how they are able to focus on any engineering work, but kudos to them for finding a way.
There are use cases for it, just as there are use cases for any type of database. The biggest source of problems that so many of us ran into was using MongoDB as the Swiss Army knife for projects. At the end of the day, most data is relational and should be put in a relational database. Otherwise, you'll end up running into roadblocks and inefficient workarounds shoehorning in relational operations.
That's the point. Using relational data in a non-relational system is just foolish, and introduces issues not worth dealing with. What exactly is MongoDB going to give you over PostgreSQL in that case?
Honestly, spinning up a SQL server really isn't rocket science. SQL is easier to read and write than that messy mongo query language. Though, if you're used to taking the quick and dirty route for everything, you're probably using ORMs anyhow. But why do things twice? Just do it right the first time.
There was, and perhaps still is, the idea out there that you can shave off some engineering time by ditching established principles that engineers have followed for the past 50 years.
Personally I think the idea for MongoDB is not that bad. It's just not one that works for business scenarios, where traditional principles are mandatory.
There are some things I really like about it. The pattern-matching-esque syntax for queries, for example, is quite neat. The ability to do certain types of data munging, the easy use of JS to handle certain things, all good.
But, the bread-and-butter business stuff has continually left me disappointed and sometimes working way harder than needed.
This seems great, but I think until 4.2 they don't plan to have global point-in-time consistency - just per replica set. I wonder how this affects ACID semantics?
Also: This is going to be really nice, but I sure hope a major cloud provider starts providing a managed service. It's very nice having a managed service like Amazon RDS or Google Cloud SQL.
MongoDB Atlas is a cloud-hosted MongoDB service engineered and run by the same team that builds the database. It incorporates operational best practices we’ve learned from optimizing thousands of deployments across startups and the Fortune 100. Build on MongoDB Atlas with confidence, knowing you no longer need to worry about database management, setup and configuration, software patching, monitoring, backups, or operating a reliable, distributed database cluster.
Our team has been using both shared and dedicated plans at mLab https://www.mlab.com for almost a year now, and we're very happy with the ease of use; we've had zero problems so far.
Let's be honest, lagging a few months behind, especially on a major release, isn't a bad idea.
With Rails apps in production, I keep up to a year of distance from major releases because so many rubygems need to catch up or be replaced.
That's probably true. I think it took mLab a couple of months to support 3.6. But then again, having a major version release available at launch doesn't seem that important to me compared to all other database-as-a-service features.
Since you mentioned you work for MongoDB, if you guys could partner with Heroku and add Atlas to their official add-ons our team might be able to take a look and switch ;)
I actually witnessed that once, when a customer (and MongoDB user) had some manpower allocated from an agency and one of those guys mentioned RDS. I wasn't impressed.
Well, I use SQL databases and MongoDB a lot, sometimes even in the same exact project. It turns out that while most data can be looked at as either relational or document-based with enough contortion, some of it leans fairly strongly in one direction.
Goodbye, MMAPv1. I like the changes to date formatting and type conversions. In my current project I briefly had values stored both as BigDecimal and Float until I moved the calculations app-ward into Ruby.
The biggest use case (and it's a bad one) is that a stack (MERN/MEAN), and all the tutorials for said stack, use Mongo, or a framework like Meteor. It sort of locks devs into bad practices, when being a little more picky and a little more curious about the options could make for a better architecture.
It's not like you couldn't rip out Mongo in MERN and use PERN or MyERN (MySQL Express React Node). There are some good libs/packages for using relational DBs, and the benefits may outweigh those of Mongo.
I guess one other use case maybe would be an incremental/idle game, where all data is just stored as one big json doc, and you just need to connect/update totals, then sync that data back and forth, with not a lot of relationship/connections or transactional data.
For Mongo, with existing features or the new transaction feature, is it possible to access the ids generated during an update and use them in subsequent updates as references, without a round trip back to the DB client to process or build objects? Possibly across collections as well?
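Worth noting: the official drivers generate `_id` values on the client side (an ObjectId by default), so you can mint an id yourself before anything is sent to the server and reference it from related documents. A minimal sketch using a stdlib UUID in place of an ObjectId (collection and field names are made up):

```python
import uuid

# Mint the id client-side, before any round trip to the server
order_id = str(uuid.uuid4())

order = {"_id": order_id, "status": "new"}
line_item = {"order_id": order_id, "sku": "ABC-123", "qty": 2}

# Both documents could then be inserted in one transaction, e.g.
# orders.insert_one(order, session=s); line_items.insert_one(line_item, session=s)
assert line_item["order_id"] == order["_id"]
```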
This comes at a perfect time for me, because I've been working on an application running on MongoDB and although I can get away without transactions, they would help significantly.
PostgreSQL with JSONB columns seems to have beaten them to it by quite a wide margin. MySQL too, for that matter.