Embedding business logic/application code in a database is a very different development model than using the database as a relatively transparent record store.
I am not against having business logic in the database; there are very legitimate uses for that. However, a few things must be noted.
First, code deployment is idempotent: you compile, package, and deploy code, then nuke it all and replace it with a different version, be it older or newer. Swapping application binaries/images/containers is the gold standard in deployment. Databases are... perpetual. Rolling back DDL without rolling back data can be a very complex exercise.
Second, there is no automatic separation of interface and implementation in databases. The database engine will not automatically enforce constraints embedded in stored procedures. This means extra care must be taken and additional processes must be implemented to control access, otherwise you risk corrupting data.
Embedding application code in the database breaks that separation boundary, which changes project management: database development and deployment become much more closely tied to application code development and deployment. Without great care in project management, this effectively inhibits any non-linear development flows.
Yeah, I'm interested in knowing how people manage it. With Redis, I usually put the Lua code within the same repo and embed the Lua code within the deployment artifact, so it gets tested and deployed together with the rest of the code. Then I call SCRIPT LOAD at the application initialization and I get a SHA-1 hash to call the script.
At the very least, this approach makes things feel much safer. I would like to know if there are better ways to do it, and if there are similar approaches for Postgres procedures.
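A minimal sketch of the workflow described above, assuming redis-py; the script body and key names are illustrative, not from the original. SCRIPT LOAD returns the SHA-1 of the script text, so the hash can also be computed locally and checked against what the server reports:

```python
import hashlib

# Lua script shipped in the same repo and deployment artifact as the
# application code (illustrative rate-limiter).
RATE_LIMIT_LUA = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
  redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
"""

def script_sha(script: str) -> str:
    # SCRIPT LOAD returns the SHA-1 hex digest of the script body,
    # so the expected hash can be computed without a server round trip.
    return hashlib.sha1(script.encode()).hexdigest()

def init_scripts(redis_client) -> str:
    # Called once at application startup; redis_client is a redis.Redis
    # instance (hypothetical wiring).
    sha = redis_client.script_load(RATE_LIMIT_LUA)
    assert sha == script_sha(RATE_LIMIT_LUA)  # sanity check
    return sha

# Hot path, later:
#   redis_client.evalsha(sha, 1, "rate:user:42", 60)
```

Verifying the server-returned SHA against the locally computed one catches the case where the artifact and the loaded script drift apart.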
the ddl that creates the views/tables/triggers that replace the "idempotent code" should also be idempotent, so you should be able to roll it back without rolling back any data
actually i wrote a couple tables/stored procedures that (should probably mostly) do this for postgres after looking at some other patch management libraries by depesz and steve purcell a couple weeks ago
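One way to sketch that up/down split, assuming a simple migration convention where the rollback step only drops code objects and never touches data (names and the crude check are hypothetical, not from any of the libraries mentioned):

```python
# A code-only migration: "up" installs a stored function idempotently,
# "down" removes it without touching any table data.
MIGRATION = {
    "up": """
        CREATE OR REPLACE FUNCTION normalize_email(text) RETURNS text
        LANGUAGE sql IMMUTABLE AS $$ SELECT lower(trim($1)) $$;
    """,
    "down": """
        DROP FUNCTION IF EXISTS normalize_email(text);
    """,
}

def is_data_safe(sql: str) -> bool:
    # Crude guard: a rollback is "data safe" if it only drops code
    # objects and never destroys tables or rows.
    forbidden = ("DROP TABLE", "DELETE", "TRUNCATE", "ALTER TABLE")
    return not any(tok in sql.upper() for tok in forbidden)
```

Because the up step uses CREATE OR REPLACE and the down step uses DROP ... IF EXISTS, both can be re-run safely, which is what makes rolling code back independently of data practical.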
Embedded JS enables pushing application logic down into the database, which can result in shocking gains versus pulling each record out of the database and into the middleware. In one project I worked on, we got a 100x performance improvement (not a typo).
Plv8 is widely available from cloud hosting providers including AWS, GCP and Azure.
I don’t know if this is just a weird artefact of running in Docker on an Arm Mac, but I was surprised to find it substantially faster to run my tree-walking code on Node and call out to Postgres than to run it all in Postgres with plv8. Perhaps it was using a fresh interpreter per transaction?
my favorite (plv8, but relevant) was working at an esri r&d center, and having corporate remind us that we needed to include esri's json format in our product offering in addition to geojson (our supported format).
I was away at a conference (jsconf, amusingly), and simply added the arcgis to geojson converter that was already written in javascript as part of another open source project we had released, and voila, we could suddenly support it in the database. one of the product engineers quickly added an additional database call to the backend, and we were all set.
simple problems, simple solutions, especially when dealing with json conversion.
(I haven't tried pljs, but I think this is relevant.) I've used JavaScript UDFs in Snowflake for array processing, and also for dumb POC things like importing an entire library for user-agent parsing.