
One thing to keep in mind, which I found using the PostgREST interface: you will end up having to put logic into stored procedures. The REST API is very convenient for aggregating data with joins, but I started to get stuck as soon as I wanted things like transactions. I also found that Row Level Security (RLS) for role-based access was a chore, and its developer experience left much to be desired.

If your DB needs are simple, then the REST API is very convenient. But if you are planning anything of complexity, then you'll have to bone up on your PL/pgSQL or go for a regular DB connection instead.
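To make the transactions point concrete: PostgREST exposes database functions under `/rpc/`, and each RPC call runs in a single transaction, so multi-statement logic has to live in a function. A minimal sketch (the `accounts` table and `transfer` function are made up for illustration):

```sql
-- Both updates commit or roll back together, because a PostgREST
-- RPC call executes the whole function in one transaction.
create function transfer(from_id int, to_id int, amount numeric)
returns void
language plpgsql
as $$
begin
  update accounts set balance = balance - amount where id = from_id;
  update accounts set balance = balance + amount where id = to_id;
end;
$$;

-- Called over HTTP as:
-- POST /rpc/transfer  {"from_id": 1, "to_id": 2, "amount": 100}
```

This is exactly the "logic ends up in stored procedures" effect: the transaction boundary forces the code into the database.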




I'm really curious about more of your perspective on RLS: I spent most of my career on mobile and rely heavily on Supabase to give me server superpowers. RLS _seems_ really cool to me (just write a one-liner to define access rules, as simple or as complex as you need!), but I'm guessing I'm missing something. Especially because I don't actually have users yet ;)


About the "as complex as you need": RLS can get slow very quickly for aggregate queries, and is hard to debug (since query planner doesn't work smoothly with RLS).

We have a dashboard that displays aggregated stats for our admin users, and we hit serious performance issues with ~600 users with our first implementation. This repo helped us: https://github.com/GaryAustin1/RLS-Performance
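One of the main techniques from that repo, sketched here against a hypothetical `todos` table (`auth.uid()` is Supabase's current-user helper): wrap function calls in a scalar subquery so the planner evaluates them once instead of per row.

```sql
-- Slow: auth.uid() may be re-evaluated for every row scanned.
create policy user_rows_slow on todos
  using (auth.uid() = user_id);

-- Faster: the scalar subquery lets the planner run the function
-- once (as an InitPlan) and then use an index on user_id.
create policy user_rows_fast on todos
  using ((select auth.uid()) = user_id);
```

The two policies are shown side by side for comparison; in practice you would define only one.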


Thank you, that was really helpful and actionable. For example, I had recently stopped writing filters on queries when my RLS had them "built in"; it's easy to see now that repeating them is better for performance, and since it's better for safety anyway, why not do it?
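For anyone following along, "repeating the filter" looks like this (hypothetical `todos` table again): the policy already restricts rows, but stating the condition explicitly gives the planner a direct path to an index.

```sql
-- RLS would enforce user_id = auth.uid() anyway, but adding the
-- predicate explicitly lets the planner use the user_id index
-- up front instead of filtering rows after the fact.
select *
from todos
where user_id = (select auth.uid())  -- redundant with the policy, but faster
  and done = false;
```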


Have a look at the supabase-community claims repo [1]. NOTE: this is for ACL type permissions where you want to provide granular access to a wide range of services.

This stuff is "really cool" but just keep in mind that it is pretty advanced. And exactly as another commenter noted in this thread, it is possible to destroy your performance if you need to join on other tables in an extended version of this kind of RLS policy.

In this repo, the logic is simply "if a claim exists on the JWT then grant access". But in a lot of cases you may want to do something like "if this user is an owner of <row in another table> then grant access". That can require a join to that other table. The logic can get even more complex; for example, you might want to say "allow the user access to this row if they are an owner of the project". Then you have to do more work to join from a child table, to a project table, to the user table, etc.
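A sketch of that ownership pattern, with made-up `tasks` and `projects` tables: the policy itself has to reach over to the parent table, and this predicate runs as part of every query that touches `tasks`.

```sql
-- "Allow access if the caller owns the parent project."
create policy project_owner on tasks
  using (
    exists (
      select 1
      from projects p
      where p.id = tasks.project_id
        and p.owner_id = (select auth.uid())
    )
  );
```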

These operations are in addition to any work you might be doing in the actual query being executed. I have no idea whether the query planner can recognize that you are doing the same joins in the RLS as in the main query and optimize that away. But at any rate, every single policy invocation (on every single query) will execute this logic.

These are all considerations if you are planning more advanced access policies for your data. If all you need is a binary "can access"/"cannot access", then basic RLS policies may be fine. But once you get into even moderately complex scenarios, your policies are likely to balloon in complexity and you'll be writing a fair amount of PL/pgSQL and fighting with testing and validation.

[1] https://github.com/supabase-community/supabase-custom-claims...


Aren't these extended queries with joins simply a function of the granular access rules you are trying to enforce? In other words, the downside of RLS that you mention is that it can destroy performance if the rules get complex. But the upside is that those complex rules are available. So there is no "true" downside, since the downside that is mentioned is only a feature of using the upside. One can presumably use RLS in a way that does not make use of all the advanced features it offers, but also does not destroy performance.


Consider that you have to do joins in the RLS for policy enforcement, and the same joins in your query for the business logic. One question is whether the query optimizer can recognize that the joins in the RLS are the same joins in your query, or whether it will perform the subqueries multiple times.
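One way to investigate (a sketch; the tables are hypothetical, and `authenticated` is the non-superuser role Supabase queries run under) is to EXPLAIN the query as the restricted role, since superusers bypass RLS entirely:

```sql
set role authenticated;  -- run the plan under RLS, not as superuser
explain (analyze, buffers)
select t.*, p.name
from tasks t
join projects p on p.id = t.project_id;
-- If the policy's check on projects appears as its own SubPlan
-- executed per row, the RLS work is not being merged with the
-- query's own join.
```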

The bigger downside, IMO, is the dev experience. Policies are hard to debug, and their performance is hard to test. Of course, everyone has a different bar for what they consider "hard", but if I start getting result sets back from the DB that don't match my expectations, or if the performance isn't what I expect, I have to track down whether the culprit is my query, the RLS, or some combination of the two. And while I am pretty confident with SQL, I am not so confident in PL/pgSQL, which was the point of my original comment. You will have to get confident in it if you go down this route: you'll have to learn what amounts to a complete language, but without logging, a debugger, etc., just a rudimentary set of tools.

I'm not telling people not to do it - just warning them that the path passes through some thorny territory and you may get scratched up. It isn't all roses.


The business logic has to live somewhere, and it makes sense to me to start thinking of everything in terms of PL/pgSQL if you're building with Supabase, which I'm thinking of doing.

I suppose I would like to have the problem of having so many users that my app begins to get bogged down, so I'll cross that bridge when I come to it.

The RLS stuff seems like a bonus that I can choose to use if I want to, and I was happy not having it before, so I'm planning to be careful about actually activating it in any projects going forward -- it would have to be a use-case that I thoroughly understand.


What about plv8? Write JS in Postgres! I tried it out once, for maintainability reasons, on a project of relatively low complexity: nobody else knew PL/pgSQL. It worked great.
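For readers who haven't seen it, a small sketch of what plv8 looks like (the `slugify` function is a made-up example; plv8 exposes function arguments by name inside the JavaScript body):

```sql
create extension if not exists plv8;

-- Same shape as a PL/pgSQL function, but the body is JavaScript.
create function slugify(title text)
returns text
language plv8
immutable
as $$
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
$$;

select slugify('Hello, World!');  -- hello-world
```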


I'm sure it works fine; it's just another thing to add to your stack (in some sense). Just a few days ago I saw a comment where a business owner was bragging about how his entire business ran on SQL stored procedures. He had made the technical decision to move all business logic into the database using triggers and stored procedures. That is certainly an option. Otherwise, you end up with business logic split between your code and your database, which can cause confusion and lead to hard-to-debug systems.

In that sense, if you are like the business owner who swears by SQL and makes the database the core business-logic layer of your system, then you might even appreciate that PostgREST forces you to move that kind of logic into the database. It is just something to be aware of before you make the decision, so that you aren't surprised when it happens.



