But that’s sort of the point. You are saying “if you have a DBA, use proprietary products, and have the skill to understand the trade-offs of running a DB with another data warehouse product layered in, you can handle GBs of data”.
Mongo said “Throw data at me and I’ll scale with very little work”.
Now, I’ve always largely believed that’s penny wise and pound foolish but it’s certainly a good pitch.
DynamoDB was designed to solve a very different problem than a traditional SQL database. When Dynamo (and Cassandra, which borrowed from its design) appeared, there were no databases doing multi-master failover with high write throughput - we are talking about TBs, not GBs. It's not a coincidence that Google, Facebook, and Amazon all had to build their own databases at around the same time (BigTable, Cassandra, and Dynamo, respectively).
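For the curious, here's a rough Python sketch of the consistent-hashing idea those Dynamo-style systems use to spread writes across nodes without a single master. All the names (HashRing, node-a, the vnodes count) are made up for illustration; this is the general technique, not any one system's actual implementation:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit integer derived from the key's MD5 digest.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class HashRing:
    """Toy consistent-hash ring: a key is owned by the next node point clockwise."""

    def __init__(self, nodes, vnodes=8):
        # Each node appears at several points ("virtual nodes") to even out load.
        self._points = sorted((_hash(f"{node}#{i}"), node)
                              for node in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._points]

    def owner(self, key: str) -> str:
        # First ring point at or after the key's hash, wrapping around.
        i = bisect.bisect(self._keys, _hash(key)) % len(self._points)
        return self._points[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.owner("order:12345"))  # every key deterministically maps to one owner
print(ring.owner("user:alice"))   # so writes fan out across nodes, no master needed
```

Adding or removing a node only moves the keys on the arcs adjacent to its points, which is what lets these stores rebalance TBs of data without a full reshuffle.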
With those new tools, other companies could build on top of those databases far more cheaply than a license for MSSQL or any other OLAP product of choice would have allowed.
I have been using RDBMSs all along, since 1996.
Nokia NetAct was scaling GBs of data across multiple clusters, with an OLAP reporting engine, in 2005 with no hiccups.
The experience I had with DynamoDB kind of proved to me that I haven't lost anything by staying on the SQL path.
Most NoSQL deployments I have seen could have easily been done in Oracle or SQL Server, provided the team actually had a DBA.
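To illustrate what I mean: most of those workloads were just key lookups of a blob of JSON, which maps onto a plain relational table trivially. A minimal Python sketch (table name and schema are made up; sqlite3 stands in for any RDBMS):

```python
import json
import sqlite3

# A "document store" on top of an ordinary relational table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id TEXT PRIMARY KEY, body TEXT NOT NULL)")

doc = {"user": "alice", "logins": 42, "tags": ["admin", "beta"]}
con.execute("INSERT INTO docs VALUES (?, ?)", ("user:alice", json.dumps(doc)))

# Point lookup by key -- the dominant access pattern in many NoSQL deployments.
row = con.execute("SELECT body FROM docs WHERE id = ?", ("user:alice",)).fetchone()
print(json.loads(row[0])["logins"])  # -> 42
```

And the moment you need reporting or ad-hoc queries, the data is already sitting in an engine that can do them.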