It's not bad, and Apache Spark adds SQL on top of some other NoSQL databases, but they are still hard to use.
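For context, here's a minimal PySpark sketch of what that looks like, assuming the spark-cassandra-connector package is on the classpath; the keyspace "shop" and table "orders" are made-up examples:

```python
# Minimal sketch: querying a Cassandra table through Spark SQL.
# Assumes spark-cassandra-connector is available; names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cassandra-sql-example")
    .config("spark.cassandra.connection.host", "127.0.0.1")
    .getOrCreate()
)

orders = (
    spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="shop", table="orders")
    .load()
)

# Spark pushes down what it can and does the rest itself, so arbitrary
# sorts and joins work again -- at the cost of scanning, not because
# Cassandra suddenly supports them.
orders.orderBy("total").show()
```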
Take Cassandra, for example: you can't just put a secondary index on a table to enable sorting by another column. Secondary indexes exist, but they aren't useful for that (and they perform poorly on high-cardinality data anyway).
Want a different sort? Make a new table. And duplicate the data, since there are no JOINs.
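To make the "new table per sort order" point concrete, here's a sketch using the DataStax Python driver; the schema is hypothetical:

```python
# Sketch of Cassandra's "one table per query" pattern.
# Table and column names are hypothetical.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("shop")

# Orders sorted by time within each customer partition...
session.execute("""
    CREATE TABLE IF NOT EXISTS orders_by_time (
        customer_id uuid,
        created_at  timestamp,
        order_id    uuid,
        total       decimal,
        PRIMARY KEY ((customer_id), created_at)
    ) WITH CLUSTERING ORDER BY (created_at DESC)
""")

# ...and a second, fully duplicated table just to get a sort by total.
session.execute("""
    CREATE TABLE IF NOT EXISTS orders_by_total (
        customer_id uuid,
        total       decimal,
        order_id    uuid,
        created_at  timestamp,
        PRIMARY KEY ((customer_id), total, order_id)
    ) WITH CLUSTERING ORDER BY (total DESC, order_id ASC)
""")
```

Each table bakes exactly one sort order into its clustering columns; a new query pattern means a new table holding the same rows.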
This is because Cassandra's top priorities are scalability and being always-on. New features are added only if they preserve those goals. As a result, you're guaranteed it will always scale and has no single point of failure, not even a temporary one, regardless of which feature you use.
Some other new database engines do it in the opposite order: first adding as many RDBMS features as possible, then making some of them kinda scalable (in some circumstances, if the phase of the moon is right, etc.). Sure, that might feel less cumbersome at the beginning on a single laptop (feels like the good old RDBMS), but once you scale it out, it's far from all roses.
BTW, duplicating (denormalizing) data is the way to go if you really need scalability. In the general case, joins, or traditional indexes with references from the index leaves back to the original data, do not scale on distributed data sets; the sketch below shows what the write path looks like instead.
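Roughly, every logical insert fans out to each denormalized copy, e.g. via a logged batch; this continues the hypothetical schema from the earlier sketch:

```python
# Sketch of the write path for the duplicated tables above: a logged
# batch keeps both denormalized copies consistent with each other.
import uuid
from datetime import datetime, timezone
from decimal import Decimal
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement

session = Cluster(["127.0.0.1"]).connect("shop")

insert_by_time = session.prepare(
    "INSERT INTO orders_by_time (customer_id, created_at, order_id, total) "
    "VALUES (?, ?, ?, ?)"
)
insert_by_total = session.prepare(
    "INSERT INTO orders_by_total (customer_id, total, order_id, created_at) "
    "VALUES (?, ?, ?, ?)"
)

customer, order = uuid.uuid4(), uuid.uuid4()
now, total = datetime.now(timezone.utc), Decimal("42.00")

# One logical write fans out to every table that owns its own sort order.
batch = BatchStatement()
batch.add(insert_by_time, (customer, now, order, total))
batch.add(insert_by_total, (customer, total, order, now))
session.execute(batch)
```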
I find this really cumbersome.