Every project on https://supabase.io uses PostgREST. It's an amazing tool, especially when combined with Row Level Security.
We have built client libraries for JavaScript, Rust, Python, Dart, and a few more are on the way, thanks to all the community maintainers.
Supabase employs the PostgREST maintainer (Steve), who is an amazing guy. He works full-time on PostgREST, and there are a few cool things coming including better JSONB and PostGIS support.
We still have a few benchmarks to complete, but PostgREST has been thoroughly tested now. Steve just spent the past month improving throughput on PostgREST, with amazing results (50% increase).
tldr: for simple GET requests on a t3a.nano you can get up to ~2200 requests/s with db-pooling.
Thanks! It's amazing what you can get done with optimization vs horizontal scaling.
What would you say are the current failure modes? Say for a t3a.nano, what combination of payload size/queue length/rps/other parameters would absolutely mandate an upgrade in capability?
I couldn't tell you with a reasonable degree of certainty, unfortunately. We will do some stress testing at some point, but we haven't started yet.
We see very few 500 errors across all of the PostgREST requests at Supabase. The errors that I can remember are from users doing `select=*` on tables which would return hundreds of MB of data. If you're thinking of adopting it then we'd be happy to help figure out if it's the right tool for your use-case.
I have been using Postgraphile (similar to this one but for GraphQL) for many months. The great thing about this way of building systems is that you don't spend time writing glue code, just on the real business logic.
But a big pain is to maintain your database code, for example the version control of your functions. There are no suitable linters, and testing can't be done over Postgres functions but must be done over GraphQL instead.
Using things like this will save you months of development time! Even so, I agree there are some systems that it would not be a good idea to implement in this way.
Specifically, testing the SQL side with SQL (pgTAP) and having your SQL code in files that get autoloaded to your dev DB when saved. Migrations are autogenerated* with apgdiff and managed with sqitch. The process has its rough edges but it makes developing with this kind of stack much easier.
* you still have to review the generated migration file and make small adjustments but it allows you to work with many entities at a time in your schema without having to remember to reflect the change in a specific migration file
I’ve used pgTAP. It works, but it’s not awesome if you’re coming from something like RSpec. So... I just used RSpec to test my functions instead (pgTAP lets you easily test types and signatures too, but I was more interested in unit testing function results given specific inputs).
I’m sure you could argue either way. Just adding this as an option to consider.
> Using things like this will save you months of development time!
When you need to integrate with 3rd parties though you're right back to writing traditional backend code. So now you have extra dependencies and a splintered code base.
Yes, this automates boilerplate which is awesome for small, standalone apps, but in my experience I haven't seen months of development time saved with these tools.
You can also use NOTIFY in your PostgREST stored procedure to wake up a hypothetical backend processor with low latency. I really love this project! I recommend it to people all the time.
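A minimal sketch of that (function, table, and channel names are all hypothetical): the RPC call records the job, and pg_notify wakes any worker that has run LISTEN jobs.

```sql
-- exposed by PostgREST as POST /rpc/request_report
create function request_report(report_id int) returns void as $$
begin
  insert into job_queue (kind, payload)                -- assumed queue table
  values ('report', jsonb_build_object('id', report_id));
  perform pg_notify('jobs', report_id::text);          -- low-latency wake-up
end;
$$ language plpgsql;
```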
Interesting, so how does that fit into the model definition/admin? How do you do migrations, for example? You have to do it all in SQL, I’m guessing. May not be a win in the long run.
As awb points out: you now have a splintered codebase.
I'd add that you also get hard-to-spec couplings and a difficult-to-manage microservices setup.
Tools like MQTT or paradigms like event sourcing might help. But those all presume your database is a datastore, and not the heart of your business logic.
This isn't really true, in most cases you can write FDWs over foreign resources that let you query them like any other table, use them in joins, etc. Postgres is really more of an application platform than a database at this point. Just don't try to have the same PG instance be your master and your platform.
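As a minimal sketch of the idea, here is the stock postgres_fdw wiring (server name, host, and credentials are placeholders); hand-written FDWs plug in through the same CREATE SERVER / foreign table shape:

```sql
create extension if not exists postgres_fdw;

create server remote_crm foreign data wrapper postgres_fdw
  options (host 'crm.internal', dbname 'crm');

create user mapping for current_user server remote_crm
  options (user 'readonly', password 'secret');

import foreign schema public limit to (customers)
  from server remote_crm into public;

-- joins now work across the boundary:
select o.id, c.name
  from orders o
  join customers c on c.id = o.customer_id;
```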
You might want to check Forest Admin in that case. It creates an extensible admin REST API on top of your Postgres.
Comes with native CRUD routes for all your tables but you can also declare your own to refund a transaction using Stripe's API, send an email using Twilio's API...
> When you need to integrate with 3rd parties though you're right back to writing traditional backend code. So now you have extra dependencies and a splintered code base.
Those third parties can still talk to the same database. We use this pattern all the time, PostgREST to serve the API and a whole bag of tools in various languages that work behind the scenes with their native postgres client tooling.
Regarding linters, check plpgsql_check[1]. Also, as snuxoll mentioned, for tests there's pgTAP. Here's a sample project[2] that has some tests using it.
> But a big pain is to maintain your database code, for example the version control of your functions.
The solution I came up with is to have a git repository in which each schema is represented by a directory and each table, view, function, etc. by a .sql file containing its DDL. Every time I make a change in the database I make the change in the repository. It doesn't automate anything, it doesn't save me time in the process of modifying the database, it's actually a lot of extra work, but I think it's worth it. If I want to know when, who, why and how a table has been modified over the last 2 years, I just check the logs, commit dates, authors and messages, and display the diffs if I want to see exactly what changed.
Liquibase supports plain SQL files (it just needs one author comment at the start) and custom folder structures, so if you actually want to take it one step further, do check it out ;)
It could not only be great for documentation purposes but also actually help maintenance by making sure that all statements are executed in all environments
On one of my oldest PostgreSQL projects, which I usually edit on the live database, I also added version control in the form of a one-liner pg_dump call that backs up --schema-only.
This way, whenever I make a change, I call this one-liner, overwrite the previous SQL dump in the git repo, then stage all and commit.
This way I also enjoy diffs over time for my schemas.
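Concretely, that workflow can be as small as this (database name and commit message are just examples):

```sh
pg_dump --schema-only --no-owner mydb > schema.sql
git add schema.sql && git commit -m "add index on orders(created_at)"
```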
I don't use postgraphile myself, but as the author of a schema diff tool, I know a lot of people use schema diff tools to manage function changes automatically, and seems to work pretty well for them.
I've looked quite extensively at Postgraphile and the extensive dependency on database functions and sql is an issue. Really hard to write tests and SQL itself is not the greatest programming language. The whole setup lacks so many of the affordances of modern environments.
> SQL itself is not the greatest programming language
SQL has been the biggest flaw in this stack for me. I love using PostgREST/Postgraphile et al, but actually writing the SQL is just... eh. Maybe (lets hope) EdgeDB's EdgeQL or something similar could rectify this. The same Postgres core database and introspection for Postg{REST,raphile} but with a much improved devx
I was wondering if the V8 engine integration would be something to play with, but I think it just adds inefficiency and doesn't really fix some of the core problems.
Not enormously. The biggest difference is Hasura's console, which Postgraphile lacks. However, I don't really see that as an advantage in favour of Hasura; postgraphile + graphile-worker + graphile-migrate (all by the same author) has worked so much better for me than Hasura.
One thing is that it's open source, not proprietary. You can run it on your own servers, modify it, etc, for free.
Another thing is it's Node, and can be used as a plug-in to your Express app. This can make it easier to customize and extend over time, e.g. have parts of your schema be Postgraphile and parts be custom JS.
My apologies! I'm not sure why I misremembered that. Perhaps I was confusing it with Prisma... Which appears to be open-source as well now. Hmm, my mind must be playing tricks on me.
Biggest difference I think is that Hasura doesn't use RLS for security. It has its own privileges/roles implementation. Postgraphile kinda works in PostgreSQL; Hasura works with PostgreSQL.
PostgREST is great and really reduces the boilerplate around building REST APIs. They recommend implementing access rules and extra business logic using PostgreSQL-native features (row level security, functions etc) but once you get your head around that, it really speeds things up!
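For the unfamiliar, a minimal sketch of the row level security side (table, column, and claim are hypothetical; PostgREST surfaces JWT claims to SQL as per-request settings):

```sql
alter table todos enable row level security;

-- each user only sees their own rows, enforced by the database itself
create policy todos_owner on todos
  using (owner = current_setting('request.jwt.claim.sub', true));
```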
If you're interested in playing around with a PostgREST-backed API, we run a fork of PostgREST internally at Splitgraph to generate read-only APIs for every dataset on the platform. It's OpenAPI-compatible too, so you get code and UI generators out of the box (example [0]):
We don't actually have a massive PostgreSQL instance with all the datasets: we store them in object storage using cstore_fdw. In addition, we can have multiple versions of the same dataset. Basically, when a REST query comes in, we build a schema made out of "shim" tables powered by our FDW [1] that dynamically loads table regions from object storage and point the PostgREST instance to that schema at runtime.
When we were writing this, PostgREST didn't support working against multiple schemas (I think it does now but it still only does introspection once at startup), so we made a change to PostgREST code to treat the first part of the HTTP route as the schema and make it lazily crawl the new schema on demand.
Also, at startup, PostgREST introspects the whole database to find, besides tables and their schemas, also FK relations between tables. This is so that you can grab an entity and other entities related to it by FK with a single query [2]. In our case, we might have thousands of these "shim" tables in a database, pointing to actual datasets, so this introspection takes a lot of time (IIRC it does a giant join involving pg_class, pg_attribute and pg_constraint?). We don't support FK constraints between different Splitgraph datasets anyway, so we removed that code in our fork for now.
There's a confusion that always comes up whenever PostgREST is discussed on HN, which is many times at this point: the misconception that you use PostgREST to remove your existing stack entirely and do everything in SQL. This is not true; you're not going to send emails from SQL or run entire multi-GB ETL reductions during an HTTP request.
If you want an API call to kick off some external processing, then insert that job into a queue table and do the same thing you always did before, consume the queue out of band and run whatever process you want.
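A minimal sketch of that pattern (table and column names are hypothetical): the HTTP request pays only for an INSERT, and a separate worker claims jobs with SKIP LOCKED so no two workers grab the same row.

```sql
create table job_queue (
  id      bigserial primary key,
  kind    text not null,
  payload jsonb,
  done    boolean not null default false
);

-- the worker, out of band, in a loop:
select id, kind, payload
  from job_queue
 where not done
 order by id
 limit 1
 for update skip locked;  -- safe with many concurrent workers
```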
Another one that comes up is that somehow PostgREST is "insecure". Of course, if you invert the problem, you see that PostgREST is actually the most secure option, because it uses Postgres' native security system to enforce access. That access is enforced on your API, and you know what, it's enforced on every other client to your DB as well. That's a security unification right there. That's more secure.
What PostgREST does is let you stop spending months of time shuttling little spoonfuls of data back and forth from your tables to your frontend. It's all boilerplate: install it in a day, get used to it, and move on to all those other interesting, possibly out-of-band tasks that you can't get to because the API work is always such a boring lift.
I am clearly missing something, but at that point, why not just allow the client (presumably a web app running in a browser in most cases) to just make requests via SQL? You would of course want to add authentication/authorization layers. You'd want record- and field-level restriction. But if the entirety of your data is in an RDBMS, why put this "REST" paradigm in the middle? Why not just admit you're querying a database, and query the database? If the only reason is access control, it seems a simpler solution must be possible.
> I am clearly missing something, but at that point, why not just allow the client (presumably a web app running in a browser in most cases) to just make requests via SQL?
Because REST-over-HTTP is low impedance with browser-based web apps, whereas SQL is...not.
Plus, with REST, you abstract whether all, some, or none of your data is an RDBMS; the fact that you've implemented something with PostgREST doesn't mean everything related to it and linked from it is implemented the same way.
It's difficult to grant an untrusted client a level of raw SQL access that won't let them do harm.
For example, even without access to any specific table or function, even with rate limits, I can denial-of-service the server by asking it to make a massive recursive pure computation.
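For instance (illustrative only), something like this touches no tables and needs no privileges, yet will happily pin a CPU core for a very long time; a statement_timeout caps each query, but an attacker can still keep every pooled connection busy:

```sql
with recursive burn(n) as (
  select 1
  union all
  select n + 1 from burn where n < 100000000  -- 100 million iterations
)
select count(*) from burn;
```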
I often thought the same with the use of GraphQL. Instead of building a highly curated API with business logic, optimisations and other general stuff you are building a direct interface around how your data looks in the db, a 1:1 mapping and crud interface. In software we are taught to prefer loose coupling vs tight coupling yet here people are preferring the tight coupling.
I’m not a fan of this as the user interface (API) has a tight coupling with how you store your data. Then like you say, why not just speak SQL as you have all the same issues, essentially multiple clients writing to/owning the same tables.
The PostgREST documentation makes it pretty clear that for decoupling and security reasons the data schema should be separated from the API schema. You should only provide the client with views and functions and keep all data in a separate schema. This also solves the versioning problem, because if your API schema is separate from your data, you can simply drop and re-create the entire schema for deployment without needing migrations.
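A minimal sketch of that split (schema and column names are examples): tables live in a private schema, and PostgREST is pointed only at the api schema of views.

```sql
create schema api;

-- the view is the public contract; private.users can be refactored freely
create view api.active_users as
  select id, name, email
    from private.users
   where deleted_at is null;

-- deploys can then drop and re-create the whole api schema:
-- drop schema api cascade;  (data in private.* is untouched)
```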
SQL is very complex, and T-SQL is Turing complete, meaning you can do lots of damage: you can bring servers to a halt if unchecked. It's pretty hard to restrict what can be done while keeping flexibility.
Postgres isn't 100% set up to let fully untrusted clients do stuff. For example, right now there is no access control around listen/notify. You'd have to do some baseline query pre-processing to be completely secure, which could be a non-trivial task depending on just what you want to let clients do.
I've done it in read-only internal business apps though, it's great.
I really love postgrest!
I did a lot of Django before and found myself constantly wrestling with the ORM to express what I wanted, then writing endless pages of serializers and viewsets, and having to debug why they were always slow.
PostgREST has a learning curve, but the performance boost vs Django is huge, and I can use more advanced DB features like triggers more easily and in a way that’s native to the overall project.
I've witnessed a fully functional HTTP client wrapper written in PL/SQL on Oracle. It worked very well against REST and SOAP APIs; the codebase was a mission-critical system for a large financial organization.
Development speed was an advantage, but the trade-off was that good database developers are still rare, and you had to grow and teach other [junior] devs for years. They tended to stick with the team much longer than the average developer, but I still consider it a disadvantage.
As for PostgREST, the biggest issue I have with it is a DB server being publicly available on the net; I usually try my best to either place DB servers in a private network or "hide" them.
Other than that, it's a pleasure to develop at that low level. SQL is an important skill, and it's strange that so many devs know it only superficially.
> As for PostgREST, the biggest issue I have with it is a DB server being publicly available on the net; I usually try my best to either place DB servers in a private network or "hide" them.
I've heard this argument many times (and thought it myself), but when dealing with PostgREST it seems that if you have a proper JWT setup (which is how PostgREST handles AuthN) and use Postgres' security features (like row level security), perhaps it should no longer be thought of as a rule.
IMO it seems like having the API layer only assume a role and having the DB handle AuthZ means better security, since you can implement more fine-grained rules that are actually verified by the part of the stack that already knows the data structure.
It's also not allowing arbitrary SQL: it's translating from HTTP to SQL, so nobody can do "SET ROLE 'admin';" unless you write a specific SQL function that does that.
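Roughly, the translation works like this (simplified; the real generated query also aggregates the rows to JSON and runs under the role taken from the JWT). Filter values end up properly quoted and bound, which is why arbitrary SQL can't be smuggled in:

```sql
-- GET /people?select=name,age&age=gte.18
select name, age
  from people
 where age >= 18;
```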
Surprisingly enough, crt.sh is pretty much all built in PL/SQL and PL/pgSQL.[1] I'm sure there are advantages to running it all from the database engine itself, but finding devs who can work on it must not be easy.
That's surprising, SQL is one of the most well known languages in the world. plpgsql has its quirks, but as languages go, it's pretty straightforward, most things do what you expect them to do.
It's not the language, but more about how the data is stored (data structures/indexes) and what you're going to do with the data.
At least during my time as a developer, I've come across many people who didn't understand this. When asked why they wanted to use Elasticsearch over an RDBMS (was it because they wanted a trie over a B+tree?), they didn't understand. Also, the use cases were almost always relational. Postgres actually has good enough FTS; if anything, Elasticsearch is almost always a complement to an RDBMS, not a replacement.
Honestly, folks choose Elasticsearch only in part for the trie over the B+tree. It provides a fair amount of magic with regard to appropriate defaults for dealing with large datasets in a distributed store.
I think a lot of folks would be better off with an RDBMS, but if you barely know SQL and spend most of your time making UIs, you’re lucky to have the breadth of know-how to configure Postgres the right way (no offense intended to frontend developers).
Of course, Elasticsearch’s magic defaults expectations may come back to bite you later on when you’re using it OOTB this way, but it’s hard to argue with throwing a ton of data in it and then -POOF- you have highly performant replicated queries, with views that are written in your “home” programming language, without even really necessarily understanding what your schema was to start with (yikes, but also, I get it).
I made something[1] similar in Rust, inspired by this project.
I modified the syntax to make it more ergonomic when grouping filters with parentheses for complex and nested filters.
I also added joining of tables, renaming columns, and more.
Given that, despite the project name, the tool described by TFA is pretty obviously RPC over HTTP rather than anything resembling Fielding's description, GraphQL would work just as well.
Feel free to. But maybe consider that "taking advantage of HTTP" != "REST".
> PostgREST is REST.
The primary if not sole distinction of a REST system according to the creator of the concept is hyperlinking. From what I understand of postgrest, hyperlinking is nonexistent.
> For example URLs map to resources
Which doesn't really matter when these URLs are magic strings. It's also not really true, URLs are a mix of procedures (/tsearch, literally everything below /rpc) and function names, really, to be passed a bundle of parameters through query strings.
And the project itself recommends using stored procedures (and views) when exposing the system to any sort of untrusted environments.
> it uses HTTP verbs
Not actually relevant to REST, and serving as little more than a form of namespacing.
> URLs are a mix of procedures (/tsearch, literally everything below /rpc) and function names
Perhaps you'd be surprised to know that a resource can be a stored procedure, quoting Roy Fielding[1]:
"a single resource can be the equivalent of a database stored procedure, with the power to abstract state changes over any number of storage items"
In general I think PostgREST's REST implementation is evolving. Also, we've had Roy Fielding giving feedback[2] on an issue before.
Once we fix that issue, we'll ask him whether he thinks PostgREST is REST. I have a feeling that he might reply positively :-]. We'll see.
> Perhaps you'd be surprised to know that a resource can be a stored procedure, quoting Roy Fielding[1]:
I'll direct you to the title of the post to which this comment is associated.
>> REST APIs must be hypertext-driven
> I am getting frustrated by the number of people calling any HTTP-based interface a REST API.
Now maybe I missed all the hypertext in PostgREST, but given that all of its documentation obviously fails the criterion of
> A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server).
I don't see how it could be in any way REST.
> Also, we've had Roy Fielding giving feedback[2] on an issue before.
That is feedback on an issue of HTTP implementation and compliance, it has nothing to do with REST.
Now look I really don't mind APIs being good HTTP citizens and having nothing to do with hypermedia, and that REST is an interesting idea doesn't mean it's a good idea (at least for programmatic APIs).
But it's like people saw a picture of a baby in a bath, went "well I don't need the pink thing in the middle but I'd sure like to wash up a bit", and when others point out they're carrying around a jug of soapy water which they insist is a baby called Fred those others get called "Baby purists"[-1].
This sort of proves the point though, doesn't it? REST as initially defined has become some kind of hyperlinked / referenced platonic ideal that very few do or even attempt.
Postgrest, returning/modifying predefined resource(s) based on standard HTTP verbs and whatnot, is at least as RESTful as most other things that use the term. Despite the imprecision, this is still a somewhat useful descriptor beyond "oh it's arbitrary RPC". And it's not at all like GraphQL
> This sort of proves the point though, doesn't it?
It definitely proves that postgrest is not rest in any way.
> REST as initially defined has become some kind of hyperlinked / referenced platonic ideal that very few do or even attempt.
Sure? At no point have I been arguing for doing REST. Just for not calling things REST when they obviously are not?
> Postgrest, returning/modifying predefined resource(s) based on standard HTTP verbs and whatnot, is at least as RESTful as most other things that use the term.
While I completely agree that PostgREST's qualifier is entirely as worthless as every other thing calling itself REST, I don't think that's praiseworthy.
> Despite the imprecision
It's not imprecision, it's actively lying. It doesn't do what it says on the tin, and the tin doesn't describe what it actually does.
REST is an actual acronym, not an arbitrary trademark, it stands for words which have meaning.
Just personal taste. Without knowing the actual use case it doesn't make sense to judge. However, there's one thing where GraphQL CRUD APIs will always win over REST CRUD APIs, and that's n+1. A smart enough GraphQL middleware can turn any query into a single SQL statement, whereas with REST it can very easily become a waterfall of hundreds or thousands of requests. But again, if you don't have nested data you might not need this feature.
PostgREST has solved n+1 with resource embedding[1] since 2015 (maybe a year before PostGraphQL/Postgraphile was born). So GraphQL does not pose an inherent advantage over REST for doing this.
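For example (the real generated SQL differs in its details), one embedded request is answered with a single join-plus-aggregate query instead of n+1 lookups:

```sql
-- GET /directors?select=name,films(title,year)
select d.name,
       coalesce(
         json_agg(json_build_object('title', f.title, 'year', f.year))
           filter (where f.id is not null),
         '[]'
       ) as films
  from directors d
  left join films f on f.director_id = d.id
 group by d.id, d.name;
```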
Big thumbs up for PostgREST. While it hasn't been completely issue free for me, it's a great project making a ton of progress and the community is very helpful and responsive.
I found myself butting heads with the limitations of the API quite a bit, but since it has a wonderful RPC feature, you can always drop in a custom endpoint to do what you need without completely ejecting.
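A sketch of that escape hatch (schema, table, and function body are hypothetical): any function in the exposed schema becomes an /rpc endpoint.

```sql
create function api.search_products(term text)
returns setof api.products as $$
  select *
    from api.products
   where to_tsvector(name) @@ plainto_tsquery(term);
$$ language sql stable;

-- POST /rpc/search_products   {"term": "widget"}
-- (GET also works for functions declared stable or immutable)
```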
You can also surface other systems with PostgREST, using foreign data wrappers. This is great because you can use Postgres's rock solid role system to manage access to them. FDWs are surprisingly easy to write using Multicorn, and you can get pretty crazy with them if you're fronting a read replica (which you should be doing anyway once past the proof of concept stage).
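On the SQL side, a Multicorn FDW is just a server definition pointing at a Python class (the class path, table, and options below are all hypothetical):

```sql
create extension multicorn;

create server github_srv foreign data wrapper multicorn
  options (wrapper 'myfdw.GithubForeignDataWrapper');

create foreign table github_issues (
  number int,
  title  text,
  state  text
) server github_srv options (repo 'PostgREST/postgrest');

-- the normal role system applies, so PostgREST can expose it safely
grant select on github_issues to api_readonly;
```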
Anyone here using this in production? I’ve always thought it was a neat idea, but haven’t heard of anyone building their API-oriented-SaaS company around it.
I've used it for admin tools at real jobs. And once powered a read only "consumer" (writes came from offline webscraping) hobby project with it to make a no-backend-code react app. I see no reason it couldn't work in prod, especially if you had a CDN or HTTP cache in front.
What I found most striking is that it relies on postgres for just about everything. Content obviously (sometimes straight from tables, sometimes via db views), but also users and permissions. I'd first assumed there would be a config file a mile long but it really is all Postgres.
Tried it at scale, but it was simply not performant enough. All the additional work that would normally happen before ever reaching the database now happens at the database level, which made it too slow and overloaded the databases.
It may be good for small-to-medium projects, but when you process millions of heavy computing requests - it's not for that.
Hey, can you elaborate more on how you were using it and where it was failing? I'm genuinely curious as we use it at Supabase.
I assume you were using it with high throughput? We are benchmarking it at around 2000 requests/s now [1], and finding it's better to scale it horizontally rather than vertically.
> process millions of heavy computing requests
Was this reads from the database? Was the compute happening inside a Postgres function/view?
The issue was with authenticating the requests. pgcrypto was too much combined with the geospatial operations. No database instance could handle that - or it was so expensive it wasn't worth it.
Got it, good to know. So you moved all the geospatial operations out - what did you use? I want to make sure we can support enterprise later, even for these workloads.
I've learned you're better off building software that matches exactly what you need right now and changing it over time as requirements change. These tools are great for a prototype, but you pay the debt in the future. This is coming from someone who built a system just like PostgREST for a different tech stack.
I’ve heard the same type of argument a long time ago against choosing WordPress when it was still relatively new (usually from custom webdev shops when talking to clients).
Even with WordPress, the assumption that you pay for it in the future is valid. Albeit anecdotal, I have always needed to rewrite from scratch after first launching on WordPress. But that doesn't have to be a bad thing. Often with tools that help you bootstrap quickly, say ORMs, GraphQL, and in this case PostgREST, there comes a time when optimizing or extending is more work than writing a custom-built solution. You have to decide if time to market or validating your business model is more important than a potential premature optimization.
There are definitely the right tools for the right job. I work in enterprise on applications that will run for decades. It makes no sense for us to sacrifice long term maintainability to ship something fast.
I see how it can replace 80% of the glue in my typical project. However, I am not sure how to handle the remaining, difficult 20%. Particularly, where to put the business logic.
I don't want to have all my business logic in the database. I don't want to write all business logic in SQL. SQL is not a good language for this, and the tooling is suboptimal: editors, version control, libraries, testing.
Is there a way to define stored procedures and triggers in some host language (as with SQLite)? Or is the recommended way to add an extra API handled by a language of choice? But I don't want to do the same things in two different ways (i.e. querying via PostgREST and querying the DB directly).
Zedstore[1] is a columnar storage layer for Postgres. It's not there yet, but I understand the goal is to get it into Postgres. Arrow as a format should make more sense with something like that in the background. I don't think you can efficiently pull the results in column-oriented fashion right now.
This is not a good idea at all. One of the main points of a REST API is to abstract away your internal data structures in a way that makes sense for the clients. Also refactoring and database migrations should NOT change the public API of any software, especially not a REST API. Now, when using something like this, even the most basic change will have a ripple effect on your whole infrastructure and break every client.
> It is recommended that you don’t expose tables on your API schema. Instead expose views and stored procedures which insulate the internal details from the outside world. This allows you to change the internals of your schema and maintain backwards compatibility. It also keeps your code easier to refactor, and provides a natural way to do API versioning.
Which is, you'll notice, the same best practice used for decades by DBAs who need to serve multiple applications connecting to the same database.
(Also, it's not exactly uncommon that a data store API would have a single client which you control - eg. a single webapp, an internal application... in which case you can go nuts with breaking changes.)
> Which is, you'll notice, the same best practice used for decades by DBAs who need to serve multiple applications connecting to the same database.
Clearly, the database's job, contrary to what the name suggests, is not only to have data stuck into it but also to hold a bunch of business logic that's essentially untestable. No one needs domain models when you've got a database tightly coupled with the behavior and logic of your business.
What could go wrong?
The whole point of the API is that it is a repository for accessing the data from one or more databases, and packaging it up for the user to consume. It imports domain models which are totally isolated from the rest of the dependencies. The data doesn't even have to be in the database and can come from multiple sources such as S3 or whatever. It can consume data from the user and kick off various processes or insert into the database. It is a generic interface that does more than just CRUD in the DB.
You can use pgTAP for Postgres unit testing, and with Docker you can make the process pretty fast too. I'd highly recommend looking into the features/extensions of Postgres; there's a lot of great stuff that goes beyond just being a database, which can save a lot of time and grief depending on the application.
There is a risk of mixing storage and business logic, and you might end up being tied to Postgres, but it's manageable using functions/views/schemas. And it's also a really convenient place to do some business logic, as all your data is right there and available via a very expressive/extensible query language.
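For flavor, a minimal pgTAP test (the function under test is hypothetical); run it with pg_prove or plain psql:

```sql
-- requires: create extension pgtap;
begin;
select plan(2);

select has_function('total_price');
select is(total_price(2, 9.99), 19.98,
          'multiplies quantity by unit price');

select * from finish();
rollback;
```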
If your API needs to serve data that lives in multiple sources, then PostgREST is definitely not the right tool.
It is, however, a pretty common design to have all your data flowing into a database, and being served from there. Indeed, it's probably the simplest and most common design for a basic web application. PostgREST or Hasura can do excellent work there.
I went all in on Postgraphile, directly tying APIs to the database schema. And you know what? I LOVE IT. I love it so much I've migrated all other APIs in the business over and got to delete thousands of lines of code that weren't necessary anymore. 99% of the time, REST APIs are just glue code, mapping HTTP data to database data and vice versa. And for that 1% that you do need, for example backwards compatibility with an earlier version of your API, there are computed columns, set-returning functions, views, etc... You can keep your API compatible with older clients and keep moving your schema forward.
But the most liberating thing about Postgraphile is that I get to spend a lot more time on the data structures and frontend development. And it's really paying off: features can be delivered much faster, and you don't feel useless writing glue code all the time. This thing has for real improved my life.
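As a sketch of the computed-column trick mentioned above (names hypothetical): in Postgraphile, a stable function named <table>_<field> taking the row type shows up as a field, so old clients keep working after a schema change.

```sql
-- after splitting users.name into first_name/last_name, old clients
-- still see a fullName field in GraphQL:
create function users_full_name(u users) returns text as $$
  select u.first_name || ' ' || u.last_name
$$ language sql stable;
```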
This was interesting, thanks. Have you also given Hasura a go? If not, would you? In your view are Hasura and Postgraphile reasonable substitutes? I’m not talking about non-hosted vs self-hosted, but rather capabilities, tooling, developer experience, etc. Would be great to hear your thoughts, thanks.
I've tried Hasura and it worked fine, but went with Postgraphile at the time because it was much easier to integrate into an already existing REST nodejs API. Plus I like the fact that Postgraphile leverages the database as much as possible, whilst Hasura implements things that could be done by the database. It's also much easier to extend Postgraphile, you just use Postgraphile as a library in your nodejs project and add as needed, allowing you to run entirely from one codebase. With Hasura you have to use schema stitching to extend its functionality, meaning you have to keep 2 pieces running instead of 1. When it comes to developer experience, Postgraphile is rather bare in its default form, you definitely want to add a bunch of plugins to make the resulting GraphQL side as powerful as you need it to be. Hasura is much more of a blackbox, you set it up and what you see is what you get. Postgraphile is much easier to mold into exactly what you want.
In the end they are projects with semi-same goals but different approaches. I went with Postgraphile and I haven't regretted it one bit, but no reason you can't try both, they are easy to set up and get a lay of the land.
> One of the main points of a REST API is to abstract away your internal data structures in a way that makes sense for the clients.
One of the points of database views, which are much older than REST, is to abstract away your internal data structures behind facades that make sense for the clients.
> Also refactoring and database migrations should NOT change the public API of any software, especially not a REST API.
That's...inaccurate. Refactoring, including of the base storage layer shouldn't. The exposed schema of a database is an API, which is logically distinct from the base storage layer. Some designs may just expose base tables and handle mapping for the client outside of the database, but there's no fundamental reason it has to be that way.
> Now, when using something like this, even the most basic change will have a ripple effect on your whole infrastructure and break every client.
No, it won't. PostgREST exposes a single schema. There's no reason that schema should contain low-level implementation, just the views defining the public API.
> One of the points of database views, which are much older than REST, is to abstract away your internal data structures behind facades that make sense for the clients.
It makes sense to put all the logic into DB views and ditch the domain models. Forget unit tests and functional tests against company's business logic - those are pretty obsolete concepts. We don't need robust software, we need quick and dirty duct-taped APIs. Who needs data validation?
> It makes sense to put all the logic into DB views and ditch the domain models
DB views are a mechanism for representing domain models just as much as classes in an OOP language are.
> Forget unit tests and functional tests against company's business logic
Implementing functionality in the DB changes how you implement and execute tests, but it doesn't prevent testing.
If you are using a DB at all and don't understand how to test functionality implemented there, that's a problem, sure, but the correction to that problem isn't to just minimize the functionality in the database and continuing to fail to test it.
I totally disagree with this approach. Sure, SQL can be used to express domain models.
Let me raise a few questions for you:
- Let's say we want to change the database from postgres to Oracle in the future. How do I go about doing it?
- How about complex logic that needs to be done in a declarative language such as SQL? SQL was not developed to write logic. You can do a lot of things in it; it is Turing complete, but that doesn't mean you should.
- How do you debug SQL views?
- How about version control and updating views, with traceability?
- Do you think SQL is more readable for logic code than say Python? Surely, basic logic can be represented in SQL, but IMO it suffers in readability.
- How about CPU/memory consumption and how do you manage to vertically scale?
You're trying to use a tool (views) not for its intended purpose. It was not meant for holding your company's entire domain model.
This is a completely wrong approach, especially in an enterprise environment. It might be OK for a small project.
> - Let's say we want to change the database from postgres to Oracle in the future. How do I go about doing it?
If by “database” you mean “storage backend” (either in whole or in part), then the answer is oracle_fdw.
If you mean the API implementation, then, just as if you wanted to use a different language/platform when it wasn't implemented via Postgres originally, it's a complete reimplementation of that layer.
But, really, changing DBs isn't a root need, it's a solution, and unless we know the actual problem, we can't determine a solution (and “switch DBs to Oracle” likely isn't the best solution).
> How about complex logic that needs to be done in a declarative language such as SQL?
Complex logic is often more clearly expressed in a declarative language.
OTOH, to the extent there is a need for procedural/imperative logic, Postgres supports a variety of procedural languages, including Python.
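For example (a sketch; plpython3u is an "untrusted" language, so a superuser has to install it), procedural logic can live in Python inside the same database:

```sql
create extension if not exists plpython3u;

-- strip everything but digits from a phone number
create function normalize_phone(raw text) returns text as $$
import re
return re.sub(r'\D', '', raw)
$$ language plpython3u;
```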
> How about version control and updating views, with traceability?
There are a number of variations on approaches for this with database schemas in devops pipelines. Whether your schema is just base tables or includes views/triggers/etc. doesn't really make any difference here.
> Do you think SQL is more readable for logic code than say Python?
I think using the most appropriate of SQL, PL/pgSQL, and Python for each component is more readable than just-Python.
> You're trying to use a tool (views) not for its intended purpose. It was not meant for holding your company's entire domain model.
That’s...exactly what views were designed for. For a long time the implementations in most RDBMSs were fairly limited, but that's not really the case now.
> This is a completely wrong approach, especially in an enterprise environment. It might be OK for a small project.
Honestly, LOB apps in an enterprise environment is probably where this approach is most valuable. It might not be right for your core application in a startup where you are aiming for hockey stick growth. At least, from the complaints I've heard about horizontally scaling Postgres, I'd assume that.
I love this book and it explicitly addresses a lot of pain points in designing complex systems. For example, business counterparts might request writing to CSV instead of the database. You want to decouple domain models from the RDBMS.
> You keep talking about domain models and views as though domain models aren't written as well as read.
Views can be writable (views with non-trivial relations to base tables require explicit specification of what behavior to execute on writes through triggers, but that's obviously true of classical domain models, too.)
I'm not sure about postgrest, but with postgraphile you can control field exposure and add getters/setters fairly granularly.
Regardless, that's no reason to say it's "not a good idea at all." having a schema change ripple through the stack is just a trade-off, which is often even desirable. If engineers are shipping code to production with postgrest, and attributing it in part to their success, maybe reconsider the rigidity of your architectural thinking. https://paul.copplest.one/blog/nimbus-tech-2019-04.html#api-...
This is very shallow, unimaginative thinking. Of course you'll abstract away your internal data structures: just don't expose the base tables, and give access through a client-specific schema comprised of tailored views instead.
After reading your comment I expected to see low to mediocre activity on their repository... however, 15.1k stars seems to indicate that many do think it's a good idea.
I know GitHub stars are not a measure of whether something is a good idea or not - but why do you think so many have positively engaged with it?
Because it actually is a great idea. Maybe there are some cases where it's not the best option, but many times it's a much better option than having to write custom API code, and you can invest that time in building more features instead of dealing with plumbing.
Lots of developers don't seem to understand the concept of a REST API. What are the benefits? Why are we doing this at all? How do you do it properly?
Most APIs are not REST, just some RPC-like APIs over HTTP.
When you have a tool like this, it's just easier to use it and call it a day - without thinking too much or learning the concepts properly.
I think the point is that many people in the wild conflate the term “REST API” with “arbitrary interaction over HTTP”. We’ve all seen examples, the one-off endpoint to fetch that little tertiary bit of data, that one-off POST to invalidate a cache. None of that is REST
"HATEOAS" is pretty much the only thing REST contains. "REST API" without HATEOAS has no meaning, it's just a stand-in for "good HTTP citizen" (if even that).
I can see how an API like this would be a great fit for some no-code solution integrations. Certainly a bad idea in some situations, but let's not be too quick to judge the usefulness of the project.
The biggest problem is that it has a steep barrier to entry. A few years ago I got stuck building custom social auth with PostgREST: even though the docs explained the basic principles of authorization using roles, it required a lot of SQL to work, something like a hundred lines, and those hundred lines are a paradise for mistakes for someone writing SQL functions on their first day.
How many tables are you working with? Do you solve the over/under-fetching problem or just ignore it? Do you support all of the filters that PostgREST does? How about security? How do you authorize requests?
I worked on a CRUD service for three months before replacing it with Hasura.
Your application server layer doesn't go away, it just thins up. Add your middleware and tracing provisions as needed around the DB IO boundary per usual.
*Worse performance, because things that often happen before ever reaching the DB get pushed down to the database level. Simple as that: it's not going to spend 100% of its computing power on the things you would expect it to.
So you get worse performance and harder-to-maintain code.
People who don't want to learn SQL tend to write CRUD APIs and rely on code-first ORMs to manage the database.
When you instead use PostgREST, or Hasura, you're usually going to write a ton of SQL - tables instead of classes, views and stored procedures instead of interfaces, row-level security rules instead of authorization code.
A “ton” is very relative. Those tables still have to be written even with an ORM. And as for views/RLS/triggers, the total LOC is, I would say, orders of magnitude smaller than in traditional stacks, and a lot more readable (it’s static definitions vs imperative code).
How much human mental effort is being lost to solving these ORM or JS-framework problems? The programming community is fighting problems which shouldn't exist in the first place. With good tools we wouldn't even have to bother with them.
Hasura provides a similar service for GraphQL, along with many other features and an enterprise tier. Worth a look if you are considering tools like this.
Half of the time, arguments against JWT are made because people are not using it correctly, think it is supposed to do something it was never meant to do, or are quick to dismiss JWT without being able to point to an alternative that solves their stated deficiencies. Your journey for evaluating if JWT fits your use case should start here: https://tools.ietf.org/html/rfc7519
Heroku is still around and doing fine, so what? It’s definitely far more expensive than it’s worth if you know what you are doing, but if you just want to host a dumb rails app without thinking about anything Heroku is a great option. If you want to ease into hosting your own code, Dokku is a great option. No need for the put down
Why should I? The whole idea is error prone. I have seen, many times, in the wild, excessive data exposure through REST APIs. Lowering the bar does not help.
PostgREST is far more secure than most API servers.
Any API server can leak data. Most API servers define their own, native, completely one-off security system that does all checks in-app and logs into the database as a superuser. The framework-du-jour is basically an insecure rooted process by definition.
PostgREST logs in with zero privileges. It then uses Postgres' native role based row-level security system to control access. Are you saying that is insecure?
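Concretely, the usual PostgREST role layout looks roughly like this (role and table names are examples): the connection role owns nothing, and each request switches to the role named in the JWT.

```sql
create role authenticator noinherit login password 'secret';
create role anon nologin;     -- role used for anonymous requests
create role webuser nologin;  -- role carried in the JWT's role claim

grant anon to authenticator;
grant webuser to authenticator;

grant select on api.articles to anon;             -- public read
grant select, insert on api.articles to webuser;  -- authenticated write
```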
PostgREST is not inherently more secure; it is as secure as your RLS policies and session mapping. It comes at the price of tight database coupling. When using a cache layer you might need to write an API-level security layer anyway.
Do you remember the old days, when desktop apps (many written in Delphi) connected directly to the DBMS, relying on views and stored procedures? This kind of client-server software is still very common inside corporate LANs. Since the browser speaks a very limited set of protocols, projects like PostgREST are just a bridge between frontend and storage, thus moving all the needed business logic into the DBMS. This kind of model is hell. It's history repeating itself all over again.
What RDBMS did you use in those times? PostgreSQL is pretty powerful these days (RLS, extensions, FDWs, JSON functions, etc). I'm led to believe that older RDBMSs didn't offer all the rich functionality we have today.
We recently benchmarked PostgREST for those interested: https://github.com/supabase/benchmarks/issues/2
nb: i'm a supabase cofounder