A separate policy language is explicitly useful for those who want to reuse policies across programs written in different languages. It's part of the best practice (for larger orgs/products) of decoupling authorization logic and data from your application codebases.
When you're just hacking something together, you're totally right, it might as well be Rust!
That’s fair. Another pro is the flexibility that comes from being able to store policies in a database and manage them as data instead of code, e.g. rolling your own IAM.
A good problem to solve when you need to, but for many of my projects, which admittedly don’t grow into big organizations, I find myself valuing the simplicity of the reduced toolkit.
This project looks like a very nice lightweight way to implement policy in a Rust application; I really like the ergonomics of the builder. Despite the two being very different systems, the fact that the core permissions check has the same signature as a call to SpiceDB[0] (i.e. subject, action, resource, and context) shows the beauty of the authorization problem domain regardless of the implementation.
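For readers who haven't seen either API, here's a minimal sketch of that shared shape, with illustrative type and trait names (these are not Gatehouse's or SpiceDB's actual types):

```rust
// Illustrative types only -- not Gatehouse's or SpiceDB's actual API.
struct Subject { id: String }
struct Resource { kind: String, id: String }
struct Context { mfa_passed: bool }

trait PermissionCheck {
    // The shared shape: (subject, action, resource, context) -> allowed?
    fn check(&self, subject: &Subject, action: &str, resource: &Resource, ctx: &Context) -> bool;
}

// Toy engine: the owner may do anything; everyone else may only "read",
// and only when the request context says MFA succeeded.
struct ToyEngine { owner_id: String }

impl PermissionCheck for ToyEngine {
    fn check(&self, subject: &Subject, action: &str, _resource: &Resource, ctx: &Context) -> bool {
        subject.id == self.owner_id || (action == "read" && ctx.mfa_passed)
    }
}

fn main() {
    let engine = ToyEngine { owner_id: "alice".into() };
    let doc = Resource { kind: "document".into(), id: "readme".into() };
    let ctx = Context { mfa_passed: true };
    let bob = Subject { id: "bob".into() };
    assert!(engine.check(&bob, "read", &doc, &ctx));
    assert!(!engine.check(&bob, "delete", &doc, &ctx));
}
```

The nice thing about the shape is that the engine behind `check` can be swapped (embedded library, remote service) without touching call sites.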
I would like to add some color that a policy engine is not all you need to implement authorization for your applications. Without data, there's nothing for a policy engine to execute a policy against, and not all data is going to be conveniently in the request context to pass along. I'd like to see more policy engines take stances on how their users should get that data to their applications to improve the DX. Without doing so, you get the OPA[1] ecosystem, where there are a bunch of implementations filling the gap as an afterthought, which is great, but doesn't give a first-class experience.
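To make the gap concrete, here's a tiny sketch: the policy needs a fact that isn't in the request, so the application has to fetch it before the engine can evaluate anything. `load_membership` is a hypothetical stand-in for a database or internal-API call:

```rust
struct Request<'a> { user_id: &'a str, org_id: &'a str }

// Hypothetical stand-in for a database or internal-API lookup;
// the policy engine knows nothing about this layer.
fn load_membership(user_id: &str, org_id: &str) -> bool {
    user_id == "alice" && org_id == "acme"
}

fn authorize(req: &Request) -> bool {
    // The application fetches the missing fact...
    let is_member = load_membership(req.user_id, req.org_id);
    // ...and only then can the policy evaluate it.
    is_member
}

fn main() {
    let req = Request { user_id: "alice", org_id: "acme" };
    println!("allowed: {}", authorize(&req));
}
```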
Agreed! But you're glossing over the Zanzibar point of view on this topic, which falls back to dual-writes. That approach has a lot of downsides: "Unfortunately, when making writes to multiple systems, there are no easy answers."[0]
Having spoken with the actual creators of Zanzibar, they lament the massive challenges this design presents and the heroics they undertook over 7+ years at Google to overcome them.
By contrast, we're seeing many of the best tech companies opt for approaches that let them leave the data in its source database as much as possible. [1]
Yeah, a big motivation for us was avoiding the need to keep another system up to date. Gatehouse basically sits at the execute-policy layer, and we let the application code decide how to unify the data (or not).
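A sketch of that division of labor, with hypothetical function names rather than Gatehouse's actual API: the application gathers facts from wherever they live, then hands a unified view to a policy layer that only executes, never fetches:

```rust
// Unified view the application assembles before any policy runs.
struct AccessContext { role: String, document_locked: bool }

fn fetch_role_from_db(user_id: &str) -> String {
    // Stand-in for the app's own database query.
    if user_id == "alice" { "editor".into() } else { "viewer".into() }
}

fn fetch_lock_state_from_service(doc_id: &str) -> bool {
    // Stand-in for a call to another internal service.
    doc_id == "frozen-report"
}

fn evaluate_policy(action: &str, ctx: &AccessContext) -> bool {
    // The policy layer only executes; it never fetches.
    match action {
        "edit" => ctx.role == "editor" && !ctx.document_locked,
        "read" => true,
        _ => false,
    }
}

fn main() {
    let ctx = AccessContext {
        role: fetch_role_from_db("alice"),
        document_locked: fetch_lock_state_from_service("q3-report"),
    };
    println!("can edit: {}", evaluate_policy("edit", &ctx));
}
```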
Oso local authorization looks like a fantastic solution.
Thanks for the response. In my opinion, oso does the best job of any policy engine at prescribing how input data is fed to its system (and your linked blog post demonstrates it well!).
I do think you might have pivoted the conversation, though. My post was purely about federation strategies and policy engines, but you appear to be discussing consistency and Zanzibar, which is only tangentially related. Federation and consistency aren't necessarily coupled. Oso would also require a complex scheme to achieve strict serializability, but it instead chooses to trade off consistency of the centralized data in favor of the local data.
AuthZed Cloud is a fully managed database-as-a-service platform built to be the foundation for authorization across product suites and oceans. At the core of AuthZed Cloud is SpiceDB, the premier open-source database designed specifically to store and query access control data.
For years, there have been libraries to help developers build authorization systems, but that hasn't stopped broken access control from becoming a substantial threat to the internet. The core hypothesis behind SpiceDB is that the best foundation for authorization systems is one that is centralized rather than implemented ad hoc in each application. By providing a system designed to be run centrally by a platform team for an entire organization, developers can standardize workflows, testing, and interoperation between their applications.
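A minimal sketch of what centralization means from one application's point of view: the app holds no rules of its own and delegates every check to the shared service. The trait and stub below are hypothetical placeholders, not the real authzed/SpiceDB client API (which is a gRPC CheckPermission call):

```rust
// Hypothetical shape only -- the point is architectural: every
// application asks one central service instead of embedding rules.
trait CentralAuthz {
    fn check(&self, resource: &str, permission: &str, subject: &str) -> bool;
}

// Stub standing in for a network call to the shared deployment.
struct StubClient;

impl CentralAuthz for StubClient {
    fn check(&self, resource: &str, permission: &str, subject: &str) -> bool {
        // A real client would issue a CheckPermission RPC here.
        resource == "document:readme" && permission == "view" && subject == "user:alice"
    }
}

fn main() {
    let authz = StubClient;
    // Application code contains no authorization rules of its own.
    let allowed = authz.check("document:readme", "view", "user:alice");
    println!("allowed: {allowed}");
}
```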
"Auth" is also super overloaded. OP is an authentication or AuthN tool which is not the same nor does it encompass authorization or AuthZ. I'm partial to using the terms "identity" and "permissions" instead.
The example in the video even has the name `authorizer` in the default export, but really it's an `authenticator` or `identifyer` or `isThisUserWhoTheySayTheyArer` – not a `shouldTheyHaveAccessizer`.
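A toy sketch of the distinction, with illustrative names (not taken from the video): authentication turns credentials into an identity, authorization decides what that identity may do:

```rust
struct User { id: String }

// AuthN: who is this? (Real token verification elided.)
fn authenticate(token: &str) -> Option<User> {
    if token == "valid-token" { Some(User { id: "alice".into() }) } else { None }
}

// AuthZ: may they do this?
fn authorize(user: &User, action: &str) -> bool {
    user.id == "alice" && action == "read"
}

fn main() {
    if let Some(user) = authenticate("valid-token") {
        println!("{} may read: {}", user.id, authorize(&user, "read"));
    }
}
```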
I'm in total agreement with you, but I guess the AuthN/AuthZ train left the station ages ago and no one outside of the business of selling these tools actually cares. Oh well... :o)
This is a great comparison and a great step toward pressuring cloud providers to improve their pricing.
The magic that moves the region sounds like a dealbreaker for any use cases that aren't public, internet-facing. I use $CLOUD_PROVIDER because I can be in the same regions as customers and know the latency will (for the most part) remain consistent. Has anyone measured latencies from R2 -> AWS/GCP/Azure regions similar to this[0]?
Also, does anyone know if R2 supports the CAS operations that so many people are hyped about right now?
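For context, CAS on S3-compatible stores usually means conditional writes: a PUT guarded by an ETag precondition. A rough sketch of that HTTP shape (request signing and retries elided; whether any given provider honors `If-Match` on PUT is exactly the open question):

```rust
// Compare-and-swap via HTTP preconditions against an S3-compatible
// endpoint. SigV4 signing and error handling elided.
// Cargo.toml: reqwest = { version = "0.12", features = ["blocking"] }
use reqwest::blocking::Client;
use reqwest::StatusCode;

fn cas_put(client: &Client, url: &str, expected_etag: &str, body: Vec<u8>) -> reqwest::Result<bool> {
    let resp = client
        .put(url)
        .header("If-Match", expected_etag) // write only if the object is unchanged
        .body(body)
        .send()?;
    // 412 Precondition Failed: another writer got there first.
    Ok(resp.status() != StatusCode::PRECONDITION_FAILED)
}

fn main() {
    let client = Client::new();
    match cas_put(&client, "https://example-bucket.example.com/key", "\"abc123\"", b"new value".to_vec()) {
        Ok(true) => println!("swap succeeded"),
        Ok(false) => println!("lost the race; re-read and retry"),
        Err(e) => eprintln!("request failed: {e}"),
    }
}
```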
This really is a good article. My only issue is that it pretends the only competition is between Cloudflare and AWS. There are several other low-rent storage providers that offer an S3-compatible API; Backblaze and Wasabi are worth a look, for instance. But I don't want to take anything away from this article.
>It's subtle but I also get the impression most gRPC stubs are miserably bad, that Authzed had to go long and far to get away from a lot of gRPC tarpits.
They aren't terrible, but they also aren't a user experience you want to deliver directly to your customers.
The Go one is the more mature implementation; it's generally a lot easier to refactor Go while you're figuring things out, and then build the Rust version (which is a good bit faster).
Sharing e2e test suites (realistically, two different test binaries to run at CI time) is something I'm cleaning up right now.