
I should have been more clear in the article. There are two different ways SQL is not composable.

1) Predicates can be composed, but only by resorting to string manipulation (e.g. str_join([a, b, c], " AND ")), which sucks.
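To make point 1 concrete, here is a minimal Python sketch of the string-manipulation workaround; `build_where` and the predicate fragments are made-up names for illustration, not from any library:

```python
def build_where(predicates):
    """Join SQL predicate fragments with AND, parenthesizing each one."""
    if not predicates:
        return ""
    return "WHERE " + " AND ".join(f"({p})" for p in predicates)

# "Composing" predicates means gluing strings together:
query = "SELECT * FROM orders " + build_where(
    ["status = 'open'", "total > 100", "region = 'EU'"]
)
```

The composition happens entirely outside the database, in untyped strings the DB cannot check until the query is sent.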

2) Views and subselects CAN compose, but only a few levels. Every DB I've ever seen has huge performance problems with just a few levels of nesting. Imagine your programming language only allowing 3-4 stack frames before it just dies. That's not acceptable in your programming language, so why is it acceptable in your database?
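The kind of composition I mean in point 2 can be sketched like this (hypothetical Python; each step nests the previous query as a derived table, which is exactly where planners start to fall over after a few levels):

```python
def wrap(query, alias):
    """Compose by nesting: treat the previous query as a derived table."""
    return f"SELECT * FROM ({query}) AS {alias}"

q = "SELECT * FROM t"
for i in range(4):            # each composition step adds one nesting level
    q = wrap(q, f"level{i}")
# q is now four subselects deep -- syntactically fine, but deep enough
# to trip the optimizer on many databases
```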

    It's not obvious which queries are slow
Any decently experienced programmer can determine the correct big-O complexity just by reading the code. With databases, you have to read the query, and understand what optimizations the current version of your DB is capable of making.

    No back door for map, filter, reduce
The reason I want that feature is for the case where the DB pukes on your query. What happens when your database is not capable of generating an efficient query for the data you want? Pulling everything into RAM isn't an acceptable solution either. When the DB pukes, you should be able to say "fine, let me give you the exact plan". Then I expect the DB to go off and execute that plan, and give me a stream of data using limit, offset, etc.
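Concretely, the hand-specified plan I'm asking for is just an ordinary lazy map/filter/reduce pipeline over a row stream. A Python sketch, where `rows()` is a hypothetical stand-in for a streaming cursor of `(id, region, total)` tuples:

```python
from functools import reduce

def rows():
    """Stand-in for a cursor streaming rows; nothing is held in RAM at once."""
    yield from [(1, "EU", 120.0), (2, "US", 80.0), (3, "EU", 45.0)]

eu_orders = filter(lambda r: r[1] == "EU", rows())       # filter step
totals = map(lambda r: r[2], eu_orders)                  # map step
eu_total = reduce(lambda acc, t: acc + t, totals, 0.0)   # reduce step
```

Because `filter` and `map` are lazy over the generator, the pipeline consumes one row at a time; that's the streaming behavior I'd want the DB to give me when I hand it an explicit plan.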

The DB allows you to define custom predicates. It doesn't solve the problems of needing to do string manipulation to piece together a query, and it doesn't solve the performance problems of nesting views and subselects.

I never compared SQL to MapReduce. MapReduce is a framework created by Google; map and reduce are functions for working on sequences of data that were created decades before Google was founded.




    What happens when your database is not capable of generating an efficient query for the data you want?

Then you use hints, or a stored outline (plan stability). That feature has been in Oracle since version 8, over a decade ago.
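A hint is just a specially formatted comment embedded in the query text. A sketch (the `/*+ INDEX(...) */` comment syntax is Oracle's; the table and index names here are made up):

```python
# Hypothetical Oracle-style hinted query: the /*+ ... */ comment tells
# the optimizer to use a specific index rather than choose its own plan.
hinted = """
SELECT /*+ INDEX(orders orders_status_idx) */ *
FROM orders
WHERE status = 'open'
"""
```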

Clue: MySQL is a looonng way behind the state of the art, don't base your opinions of RDBMS technology on it.


Off topic, but I wonder if there's a bias among hardware producers against large amounts of RAM. Cost-wise, I don't think there's anything stopping multi-terabyte servers, or desktops with tens or hundreds of gigs. And yet you can't buy them, even though as a software model it makes a lot more sense to keep all the data in RAM and only ship the changes to external storage.


Well, you can buy dedicated devices with lots of RAM (look up Violin Memory). But cost-wise, they are quite expensive.


I'm a bit annoyed that they don't offer prices up front, but it looks like a very nice product. I wonder if you can use a Violin device as primary memory, or whether you can only mount it as a filesystem.


I think they are mostly used as external storage, sometimes backed by spindles as a very fast SAN.

If you need huge amounts of local RAM then I think you're in the realm of the IBM zSeries and other mainframe devices. Nice stuff indeed, when you have the spare change...





