As for the rest of the article, it feels like a basic Data Warehousing 101 re-discovered. It should have been titled "Analytics: Back To The Future" :-)
No kidding. The number of startups that have flocked to Hadoop for "data analytics" over the past 5 years is extremely disheartening. Almost all of those cases would be far better served by any off-the-shelf RDBMS, let alone a column-oriented one. Same thing with MongoDB.
How much time and money would have been saved by learning Database Theory/SQL/Data Warehousing/Dimensional Modeling instead of cramming everything into an unstructured data store?
I think part of the reason so many people have gone with Hive is that good, production-ready column stores are expensive. Redshift is poised to change that. If you're shopping in this space, Infobright is also worth checking out.
And even at moderate data sizes (10+ GB per table), row-store DBs tend to become painful. This is especially true when you need to support ad-hoc reporting queries, since the usual technique of matching your schema, indexes and queries to each other stops being effective. With truly ad-hoc reporting, your only hope is lots of shallow indices rather than indexes tuned to a particular query.
Dimensional modeling (I'm a fan of Kimball's approach) mitigates these problems quite well while still offering very flexible ad-hoc reporting. Works great on a row-based RDBMS, even better on columnar.
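To make the dimensional modeling point concrete, here's a minimal sketch of a Kimball-style star schema using Python's built-in sqlite3 - the table and column names are made up for illustration. The payoff is that every ad-hoc rollup takes the same shape: join the fact table to the dimensions, filter on dimension attributes, group and aggregate.

    import sqlite3

    # Toy Kimball-style star schema: one fact table of sales events plus
    # two dimension tables. All names here are illustrative.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INT, month INT, day INT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (date_key INT, product_key INT, quantity INT, amount REAL);
    """)
    conn.executemany("INSERT INTO dim_date VALUES (?,?,?,?)",
                     [(20130201, 2013, 2, 1), (20130302, 2013, 3, 2)])
    conn.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                     [(1, "blue widget", "widgets"), (2, "red gadget", "gadgets")])
    conn.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
                     [(20130201, 1, 3, 30.0), (20130302, 1, 1, 10.0), (20130302, 2, 2, 50.0)])

    # Every ad-hoc rollup has the same shape: join facts to dimensions,
    # filter on dimension attributes, aggregate the measures.
    query = """
    SELECT d.year, d.month, p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key    = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    WHERE p.category = ?
    GROUP BY d.year, d.month, p.category
    ORDER BY d.year, d.month;
    """
    for row in conn.execute(query, ("widgets",)):
        print(row)   # (2013, 2, 'widgets', 30.0) then (2013, 3, 'widgets', 10.0)

The same layout carries over to a columnar engine unchanged; the aggregates just get faster.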
Redshift is indeed a solid product, but all these comparisons against Hive are surprising, as Hive isn't the right tool for this in the first place. Infobright, Greenplum, Aster, Vertica, etc. are the products Redshift seeks to disrupt.
I realised a few years ago that pretty much every database course only teaches OLTP. OLAP never really gets a look-in.
At my university, standard normalisation was taught in the "databases" course. OLAP was mentioned as part of the "advanced databases" course.
The database course at that time blew about half its time on building PHP applications to talk to the database. I hate to second-guess my professors, but I can't help but feel that a more productive use of the time would have been to teach normalised OLTP in the first half and dimensionally modelled OLAP in the second half. Better yet, to divide them into two courses and spend some time talking about database history ("here's why network and hierarchical databases sucked") and maybe give some introduction to how query planners work.
Well ... really, just read a good textbook from each side of the spectrum. Date's Databases and Kimball's The Data Warehouse Toolkit are both good.
Edit: actually, maybe not Date. It's up to you. It's good, but it's controversial because he's not a fan of SQL and so he uses his own language.
The one I used in uni was Ramakrishnan & Gehrke's Database Management Systems. It was OK, but there's a certain amount of at-the-time trendy bullshit that, to me, detracts from a focus on relational databases for their own sake.
Edit 2: and Joe Celko's SQL for Smarties contains good oil on the relational paradigm.
One thing to be aware of with both is the lack of any support for wide tables - Infobright inherits MySQL's limit of 65,535 bytes per row (and UTF8 means 3 bytes per char); with Redshift you can store wider rows but you can't query them (http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE...). Obviously not a deal-breaker, but it locks you firmly into the densely-populated-rows, relational mindset.
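To put that row limit in perspective, here's a rough back-of-the-envelope sketch (it ignores per-column length prefixes and the null bitmap, so the real ceiling is slightly lower):

    # Rough arithmetic for MySQL's 65,535-byte row limit under utf8
    # (the 3-bytes-per-char "utf8", not utf8mb4). Ignores per-column
    # overhead, so the real ceiling is slightly lower.
    ROW_LIMIT_BYTES = 65535
    BYTES_PER_CHAR = 3

    def max_varchar_columns(chars_per_column):
        """Roughly how many VARCHAR(chars_per_column) columns fit in one row."""
        return ROW_LIMIT_BYTES // (chars_per_column * BYTES_PER_CHAR)

    for width in (255, 100, 50):
        print("VARCHAR(%d): ~%d columns max" % (width, max_varchar_columns(width)))
    # VARCHAR(255): ~85 columns max - so a genuinely wide table of text
    # attributes simply won't fit as in-row VARCHARs.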
Hi, I came across your project a while ago and while I haven't had a chance to test it out yet, I must say that using CloudFront as a collector is a brilliant way to scale an analytics platform - it should be very scalable, reliable and economical.
Btw, do you have more experience to share? E.g. with Infobright: how many events can be processed per second? What would be the "ETL latency"? Can Infobright handle 10TB of data easily - any caveats besides the row limit? Thanks.
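For anyone who hasn't seen the CloudFront-collector trick being praised above, the gist is: encode each event as query parameters on a GET for a tiny pixel served by CloudFront, then recover the events later by parsing the CloudFront access logs out of S3. A minimal sketch of the client side - the domain and field names below are made up for illustration:

    import urllib.parse

    # Hypothetical CloudFront distribution serving a 1x1 transparent pixel.
    # CloudFront writes every request (query string included) to its access
    # logs in S3; a later batch/ETL job parses those logs back into events.
    COLLECTOR_URL = "https://d1234example.cloudfront.net/i"   # made-up domain

    def pixel_url(event):
        """Build the tracking-pixel URL a browser would request, e.g. via an <img> tag."""
        return COLLECTOR_URL + "?" + urllib.parse.urlencode(event)

    # Field names are illustrative only.
    print(pixel_url({"e": "pv", "page": "/pricing", "uid": "abc123"}))

The "ETL latency" question above then largely comes down to how often CloudFront delivers its log files and how often the log-parsing job runs.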
In 2007 I worked for a firm with a 4-billion-row join table in PostgreSQL. Might've been version 7 or 8, I don't recall which. It ran on a quad-core server with 16GB of RAM. Joins going through this table took about 2-3 seconds to complete.
But I suspect the join must have been over an indexed column, so it didn't touch all 4 billion rows; otherwise 2-3 seconds would be hard to believe. The GROUP BY query in the article has to access all 3 billion rows, which makes a huge difference.
I remember it well, because I was trying to explain why having tens of gigabytes of indexes wouldn't help them much if they only had 16GB of RAM.
In terms of GROUP BY performance, it depends a lot on the kind of data and how it's stored. For example, taking a sum over a columnar store is quite amenable to parallel execution, and a lot of databases will do it that way.
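As a toy illustration of that point, here's a pure-Python stand-in for what a column store does natively: because a column is stored contiguously, a SUM splits into independent partial sums over chunks, which can run in parallel and then be combined.

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        return sum(chunk)

    def parallel_sum(column, n_workers=4):
        """Split one 'column' of values into chunks, sum each in parallel, combine."""
        chunk_size = (len(column) + n_workers - 1) // n_workers
        chunks = [column[i:i + chunk_size] for i in range(0, len(column), chunk_size)]
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            return sum(pool.map(partial_sum, chunks))

    if __name__ == "__main__":
        column = list(range(1000000))   # pretend this is a fact-table measure
        print(parallel_sum(column))     # 499999500000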
I can't think of an off-the-shelf RDBMS which can't handle queries on 3 billion rows.
SQL Server can
Oracle can
Postgres can
Even MySQL can (!)
The limitations are almost always in the hardware, not the software.
If you're looking at column-based systems, check out Greenplum (does both row- and column-based storage), InfiniDB (MySQL-based), and all sorts of expensive but very fast appliance options like Netezza, Teradata, etc.
When you log every mousedown because the founder misunderstands A/B testing, 3 billion rows is easy to come by. Besides - you're busy changing the world, so you should expect to use the same technology as Facebook and Google.
It's so exhausting to hear how much smarter you are, and that if we just educated ourselves we would realise the error of our ways. The people who choose these technologies aren't stupid or masochistic. They understand their use case, and the fact is that there are plenty of situations where SQL is suboptimal.
For a basic overview: http://en.wikipedia.org/wiki/Paraccel