
What history rewriting?

I have been using RDBMSs all along, since 1996.

Nokia NetAct was scaling GBs of data across multiple clusters with an OLAP reporting engine in 2005, with no hiccups.

The experience I had with DynamoDB kind of proved to me that I haven't lost anything by staying on the SQL path.

Most NoSQL deployments I have seen could have easily been done in Oracle or SQL Server, provided the team actually had a DBA.




But that's sort of the point. You are saying: "If you have a DBA, use proprietary products, and have the skill to understand the trade-offs of running a DB with another data-warehouse product layered in, you can handle GBs of data."

Mongo said “Throw data at me and I’ll scale with very little work”.

Now, I've always largely believed that's penny-wise and pound-foolish, but it's certainly a good pitch.


"mongo said" - the obligatory mongo DB is webscale video:

https://m.youtube.com/watch?v=b2F-DItXtZs


Which is what I originally stated: these deployments "boil down to not learning how powerful SQL and its programming extensions are".

One can only have bought into Mongo's story by lacking the skills to understand how fake it was.
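As a minimal sketch of the "programming extensions" plain SQL offers, here is a window-function query via Python's built-in sqlite3 driver (the engine choice is my assumption; the thread names no specific database, and window functions require SQLite 3.25+). Ranking rows within groups like this is exactly the kind of logic that often gets reimplemented, badly, in application code against a NoSQL store:

```python
# Hypothetical sales data, ranked per region entirely inside SQL.
# Requires SQLite 3.25+ (bundled with Python 3.8+ on most platforms).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('north', 100), ('north', 250), ('south', 300), ('south', 50);
""")

# Rank each sale within its region -- no client-side sorting or grouping.
rows = conn.execute("""
SELECT region, amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM sales
ORDER BY region, rnk
""").fetchall()

for region, amount, rnk in rows:
    print(region, amount, rnk)
# north 250 1
# north 100 2
# south 300 1
# south 50 2
```

The database does the partitioning, ordering, and ranking in one declarative statement; recursive CTEs, triggers, and stored procedures extend this further depending on the engine.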


DynamoDB was designed to solve a very different problem than a traditional SQL database. When DynamoDB (and the various Cassandra flavors) were released, there were no databases doing multi-master failover with high write throughput - we are talking about TBs, not GBs. It's not a coincidence that Google, Facebook, and Amazon all had to write their own databases at around the same time (BigTable, Cassandra, Dynamo).
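The core idea that lets these stores spread write throughput across many nodes can be sketched in a few lines. This is a toy illustration, not Dynamo's actual implementation (which uses consistent hashing with virtual nodes and replication): each item's partition key is hashed onto a set of nodes, so no single master absorbs all writes. The node names and key format here are made up:

```python
# Toy partition-key placement: hash each key to one of N nodes.
# Real systems (Dynamo, Cassandra) use a consistent-hash ring with
# virtual nodes and replicas; this only shows the load-spreading idea.
import hashlib
from collections import Counter

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical cluster

def owner(partition_key: str) -> str:
    # Hash the key to a stable integer, then map it onto a node.
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# 10,000 keys land roughly evenly across the four nodes -- the property
# that lets write throughput grow by adding nodes instead of scaling
# one master vertically.
spread = Counter(owner(f"user#{i}") for i in range(10_000))
print(spread)
```

The trade-off the thread is arguing about follows directly: you gain horizontal write scaling, but queries that don't include the partition key (joins, ad-hoc reporting) become expensive, which is what the OLAP-on-RDBMS camp never had to give up.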

With those new tools, other companies could build on top of those databases for far less than a license for MSSQL or any other OLAP product of choice would cost.



