This is because Cassandra's top priority is to be scalable and always-on in the first place. New features are added only if they preserve those goals. As a result, you are guaranteed it will always scale and that it has no single point of failure, not even a temporary one, regardless of which features you use.
Some other new database engines do it in the opposite order: first adding as many RDBMS features as possible, then making some of them kinda scalable (in some circumstances, if the phase of the moon is right, etc.). Sure, that may feel less cumbersome at the beginning on a single laptop (it feels like good old RDBMS), but once you scale it out, it is far from all roses.
BTW, duplicating (denormalizing) data is the way to go if you really need scalability. In the general case, joins, or traditional indexes with references from the index leaves back to the original data, do not scale on distributed data sets, so you model one table per query and write the same data more than once; see the sketch below.
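To make that concrete, here is a minimal sketch of the "one table per query" denormalization pattern using the DataStax Python driver (cassandra-driver). The keyspace, table, and column names are made up for illustration, and it assumes a Cassandra node is reachable on localhost with a keyspace called "demo" already created:

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])     # assumes a local Cassandra node
    session = cluster.connect("demo")    # assumes keyspace "demo" exists

    # The same comment data is stored twice, each copy partitioned for one
    # query path, instead of relying on a join or a secondary index.
    session.execute("""
        CREATE TABLE IF NOT EXISTS comments_by_post (
            post_id uuid, created_at timeuuid, author text, body text,
            PRIMARY KEY (post_id, created_at))
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS comments_by_author (
            author text, created_at timeuuid, post_id uuid, body text,
            PRIMARY KEY (author, created_at))
    """)

    def add_comment(post_id, author, body, created_at):
        # The application writes to both tables; each read then touches a
        # single partition, which is what keeps reads scalable.
        session.execute(
            "INSERT INTO comments_by_post (post_id, created_at, author, body) "
            "VALUES (%s, %s, %s, %s)", (post_id, created_at, author, body))
        session.execute(
            "INSERT INTO comments_by_author (author, created_at, post_id, body) "
            "VALUES (%s, %s, %s, %s)", (author, created_at, post_id, body))

The trade-off is extra writes and storage, which Cassandra handles cheaply, in exchange for reads that never have to fan out across the cluster.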