One might argue that one approach is superior to the other; I'd argue they are more like duals of one another. The PolyScale approach analyzes the queries and identifies the semantic and statistical relationships between reads and writes. The Noria approach forgoes analyzing the queries and instead maintains a materialized-view-like representation of where the data should be right now.
The PolyScale approach neither maintains nor requires a separate data representation, and so it saves space. On the other hand, precisely identifying the relationship between reads and writes is not always possible, so the PolyScale approach must sometimes over-invalidate in the interest of accuracy.
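To make the over-invalidation tradeoff concrete, here is a minimal, hypothetical sketch (not PolyScale's actual implementation): when a cache cannot tell exactly which cached reads a write affects, a safe fallback is to evict every cached query that touches the written table, even queries the write could not have changed.

```python
from collections import defaultdict

class ConservativeQueryCache:
    """Sketch of invalidation without a separate data representation:
    map each cached query to the tables it reads, and on any write,
    conservatively evict every query that reads the written table."""

    def __init__(self):
        self.results = {}                         # query text -> cached result
        self.queries_by_table = defaultdict(set)  # table -> queries reading it

    def cache(self, query, tables_read, result):
        self.results[query] = result
        for table in tables_read:
            self.queries_by_table[table].add(query)

    def on_write(self, table):
        # A write may touch only one row, but without finer-grained
        # analysis we must drop every cached query reading this table.
        for query in self.queries_by_table.pop(table, set()):
            self.results.pop(query, None)

cache = ConservativeQueryCache()
cache.cache("SELECT * FROM users WHERE id = 1", {"users"}, {"id": 1})
cache.cache("SELECT * FROM users WHERE id = 2", {"users"}, {"id": 2})
cache.on_write("users")       # evicts both entries, though the write
assert cache.results == {}    # may have affected only one of them
```

The second cached query may have been perfectly fresh, but correctness demands it be evicted anyway; finer statistical or semantic analysis is precisely what narrows this blast radius.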
There are scenarios in which show-me-the-data (Noria) beats show-me-the-math (PolyScale): for example, running complex queries against a relatively simple schema. There are also scenarios in which the statistical (PolyScale) approach wins: for example, when the queries are relatively simple, or when not all writes to the underlying data are visible.
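The show-me-the-data side of the duality can be sketched just as briefly. This is a hypothetical toy, not Noria's actual dataflow engine: every write incrementally updates a materialized view, so reads are served from precomputed state and no invalidation logic is needed at all.

```python
class MaterializedVoteCount:
    """Sketch of the materialized-view dual: rather than analyzing
    queries, keep the answer to a read (votes per article) continuously
    up to date as writes arrive."""

    def __init__(self):
        self.votes_per_article = {}  # article_id -> vote count (the "view")

    def on_write(self, article_id):
        # Each write incrementally updates the view; nothing is evicted.
        self.votes_per_article[article_id] = (
            self.votes_per_article.get(article_id, 0) + 1
        )

    def read(self, article_id):
        # Reads never touch the base data; they hit the view directly.
        return self.votes_per_article.get(article_id, 0)

view = MaterializedVoteCount()
view.on_write(7)
view.on_write(7)
assert view.read(7) == 2
assert view.read(8) == 0
```

The cost is the extra state itself, and the requirement that every write flow through the view; a write the view never sees (the "not all writes are visible" case above) silently leaves it stale.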
There are additional features of PolyScale that set it apart. Full disclosure: I work at PolyScale.