Prisma 2.0 – Type-safe and auto-generated database client (prisma.io)
210 points by janpio on June 9, 2020 | 107 comments



Prisma's architecture seems novel and ... a little strange to me. It works by running a Rust engine as a subprocess and then communicating with the engine from JS land over a non-spec-compliant GraphQL API. The engine holds the actual database connection pool and does all the SQL generation and data marshalling. See https://www.prisma.io/docs/reference/tools-and-interfaces/pr... for more info on this arrangement.

It has some weird ramifications though:

- when they go to implement a new feature (like recently added JSON column support) they have to implement it on both sides which can cause bugs like this: https://github.com/prisma/prisma/issues/2432

- they're somewhat limited to the semantics of GraphQL-based RPC, which notably excludes stateful things like arbitrary START TRANSACTION; blocks that might or might not commit. See https://github.com/prisma/prisma-client-js/issues/349 for more info on that

- they don't run everywhere JavaScript runs like the browser or Cloudflare Workers (unless there's something fancy that compiles the engine to WASM I'm not aware of)

I wonder if their intention is to re-use the engine between different JS processes for caching / sharding or something like that, or to add Prisma clients in other languages. Why create the indirection?

I do like Prisma's type safety compared to the pure TypeScript alternatives like TypeORM and MikroORM -- it's really good at typing the results of specific queries and preventing you from accessing stuff that wasn't explicitly loaded. The style of the query language is the cleanest I've seen out of the three as well IMO.
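
For anyone who hasn't tried it, here's roughly what that typing looks like (the model and field names are placeholders from an imagined schema):

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    async function main() {
      // Only the selected fields exist on the result type.
      const users = await prisma.user.findMany({
        where: { active: true },
        select: { id: true, email: true },
      })

      users[0].email    // ok: typed as string
      // users[0].posts // compile error: `posts` wasn't selected, so it isn't on the type
    }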

Edit: I think node modules can install arbitrary binaries to some serverless JS runtimes. I'm not sure specifically about Cloudflare, but I know their dev tool bundles JS using webpack, which would exclude other binaries in node_modules.


A few things to note:

- Prisma 1 was a completely independent server, and Prisma 2 most likely started as a rewrite of Prisma 1, so it followed the same approach

- This indirection will be removed if someone can finally land Rust bindings to N-API (looking at you, Neon people)

- Prisma plans to support multiple languages, so it makes sense to have a language-agnostic engine

- This is not far from having a PG engine coded in C and interfacing with it, like most libraries do anyway; JavaScript is just too slow for this kind of stuff


I generate Typescript types from my database (https://github.com/kristiandupont/kanel) which gives me type safety on back- and frontend without relying on an ORM. I am curious about Prisma but I don't see any advantage to it from my quick skimming.


I do the same using kanel. It's just enough to make the typings smooth without dictating anything else about how they are used. I prefer to write the queries directly in sql using pg-promise and then type the results of the query and the parameters of the query using the output of kanel. Any changes to the db result in generating new typings followed by running the tests to make sure nothing broke.
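
A trimmed-down sketch of that setup (the `Users` interface and the import path stand in for whatever kanel generates for your schema; the query itself is made up):

    import pgPromise from 'pg-promise'
    // Hypothetical import: an interface kanel generated from the `users` table.
    import { Users } from './generated/models'

    const db = pgPromise()('postgres://localhost/mydb')

    async function activeUsers(): Promise<Users[]> {
      // Rows come back typed with the generated interface; when the schema
      // changes, regenerating the types plus re-running the tests catches drift.
      return db.any<Users>('SELECT * FROM users WHERE active = $1', [true])
    }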


What do you use to write the queries? Some query builder?


I use a homemade library on top of Knex that is not open source yet. I will extract it from the Submotion code and release it as well, but I am not sure when.


Knex seems to be what everyone uses; it has a migration tool too.


Thank you for writing this up hbrundage - that's a pretty good summary.

I'm the co-founder of Prisma, so should be able to answer some of your questions :-)

Prisma has a Query Engine written in Rust, and a language binding for each target language. Currently we only support JavaScript and TypeScript, but a binding for Go is already in the works. As Sytten alluded to, this split allows us to write and test all the logic once, and have a relatively thin layer that is only concerned with presenting an ergonomic API following the idioms specific to a given language. Now, as you mention, this introduces a bit of extra work on our part, and the potential for bugs when the two sides don't add up. But this problem is very minimal in practice, and in fact most features can be implemented with just a change on the Rust side, as the language bindings are generated based on an API description emitted by the Rust binary.

Another reason for the split is performance. It's reasonable to ask how performant a library that simply marshals some data from a database really has to be. But it is important to realise that Prisma Client is quite a bit more ambitious than that. Where other libraries usually try to generate a single complex query, Prisma will often issue multiple smaller queries and partly join the data in memory. The throughput difference between V8 and Rust is significant here.

You are right that our architecture precludes us from doing things like explicitly starting a transaction and keeping it open for a longer duration of time. Our long-term goal is to create an Application Data Platform for medium-sized software development teams that can't afford to invest in internal infrastructure to the same degree as big tech companies. If you are curious what this might look like, you can take a look at TAO at Facebook or Strato at Twitter. For long running transactions specifically, we believe that they are often misused by developers who think they get a certain guarantee that they don't actually get from wrapping their workload in a transaction. There are often better approaches - both more correct, and easier to reason about, and that's what we want to teach people.

Currently we are building 30+ different binaries for each release in order to support most sensible platforms. This is a pain for us, but I hope most of our users will see that this is something they rarely, if ever, have to worry about. We believe that WASM + WASI will eventually let us remove the need for the binary for applications running on Node, but the ecosystem is not quite there yet.

Ultimately, I think the biggest step forward represented by Prisma 2 is the type safety and result typing. We have been pushing the TS compiler to its limits, and I believe the developer experience speaks for itself. We have a lot of work to do in order to build out the feature set, but I hope many developers will appreciate the improved ergonomics, and trust that we will work diligently over the coming months to add the features that they need.

Thank you for looking into Prisma!


Ah, that explains a lot, thanks for the breakdown. And yeah, Prisma 2's type safety is stellar and in a league of its own.

With respect to building out a big-boy operational datastore -- I think that's really cool. It'd be nice for me to be able to use something like TAO or EVCache or what have you without having to build it all myself, that's for sure. I understand why Prisma's API is constrained compared to a regular relational database in order to support those needs. That said, I think that the very best (and certainly most sellable) Application Data Platform doesn't require adopters to drop key abilities or semantics they are used to in order to switch away from a normal database. I think those semantics only need to be dropped at the kind of scale which very few Prisma users will ever reach, yet they pay the productivity penalty for those missing semantics from the very first moment they begin using the tool.

Yes, you can do a lot of the same things you might want to do with transactions with nested or batch operations, but not everything. For example, Rails' transactional testing feature is battle tested and seemingly well loved by the community, and currently impossible with Prisma. Instead, you must use a slower and more error-prone database cleaner tool. Another example would be a bank-style database with double-entry accounting. You want to decrement one account by a certain amount and increment another account by the same amount transactionally, but only if the source account's balance covers it. `SELECT FOR UPDATE` to the rescue in Postgres, but negative account balances with Prisma.
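
To make the bank case concrete, here's a rough sketch of what that looks like with a plain query builder (knex here; table and column names invented):

    import Knex from 'knex'

    const knex = Knex({ client: 'pg', connection: process.env.DATABASE_URL })

    async function transfer(fromId: number, toId: number, amount: number) {
      await knex.transaction(async (trx) => {
        // Lock the source row so concurrent transfers can't both read the old balance.
        const from = await trx('accounts').where({ id: fromId }).forUpdate().first()
        if (!from || from.balance < amount) {
          throw new Error('insufficient funds') // throwing rolls the transaction back
        }
        await trx('accounts').where({ id: fromId }).decrement('balance', amount)
        await trx('accounts').where({ id: toId }).increment('balance', amount)
      })
    }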

Teaching developers to not hold transactions open for a long time, or to use smart, efficiently implemented nested inserts is a good thing without a doubt, but you could still do that education while preserving transaction semantics. Devs have been used to having those since the 70s. The two aren't in conflict if you ask me. It would make your life harder, that's for sure, but it would make my life as a potential user easier, and remove one argument for not switching over.


AFAIK Cloudflare workers run JS/WASM in a V8 isolate, don't have Node.js APIs, and block eval()/new Function.


I don't see how useful it would be to query your database from the browser or Cloudflare Workers; in both cases you want the data access closer to your database to reduce the RTT. And you definitely don't want to give the browser direct access to your database - even if that were possible.

The implementation choice seems odd to me, nonetheless.


You're totally right about connecting to your database from the edge. That said, I think JS is quickly becoming the target for a lot of hosted runtimes because it is so easy to sandbox and has the option to drop down to WASM for high performance and indirectly supporting other languages. Cloudflare Workers (and Fastly, and Superfly, and and and) are all following that path at the edge, but I think as the consensus builds around JS + WASM as a server runtime, we might see the same style of environment for more traditional workloads that might wanna connect to databases.


> You're totally right about connecting to your database from the edge.

You might want to connect from your container on the edge, and separating off the work to another process makes having single-process lightweight containers more difficult (do you run a multi-process container? do you sidecar the workers? etc).

So yes, I too found the architecture a bit odd. I have also seen it in https://mediasoup.org where it makes more sense to use native workers, but it carries the same multi-process challenges.


That's true, and somewhat ironic as I find myself building just such an environment because NodeJS is nearly impossible to sandbox securely, but V8 is built for that use case. Prisma (or anything else with a native dependency) would not run in that environment.


The RTT is the same for the client either way, isn't it? Is the concern that you're wasting DB resources instead of intermediate resources, i.e. the real concern is extended DB connection time?


Yes, with transactions the big concern is transaction time, which is often dominated by the RTT.

With performance in general you want your chatty transacting code as close to the database as possible and to merely invoke it from afar. Then you have many very short RTTs inside the transaction, plus one long RTT before and after. Stored procedures or functions are actually optimal here.


I empathize with the frustration that this library is trying to solve. It's pretty nifty too! Ultimately, however, I don't believe this is the correct approach.

The problem with this tool, like every other multi-SQL-flavor ORM and query builder, is that it requires users to learn yet another language. In addition to Node.js and SQL, users need to learn the Prisma query language. This is not trivial, and users that are already accustomed to working with SQL will need to relearn PrismaSQL.

I think the best approach to this problem is a single-SQL-flavor query builder that attempts to match SQL as closely as possible while adding in the niceties of being able to pass in JavaScript objects instead of raw SQL strings. Let's be honest: raw SQL-in-JS is no fun.

This would lead to PostgreSQL-, MySQL-, SQLite-, etc.-specific query builders. Knex is close, but it ultimately doesn't work for most because it's missing some dialect-specific features (e.g. ON CONFLICT DO UPDATE). While this doesn't exactly match the type safety benefits of Prisma, the benefits in ease of use and feature parity of a dialect-specific query builder far outweigh the difficulties of learning a new query language like Prisma's.
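
For those gaps you end up dropping to raw SQL through the builder anyway; roughly (table and columns invented):

    import Knex from 'knex'

    const knex = Knex({ client: 'pg', connection: process.env.DATABASE_URL })

    // The vendor-specific clause goes through knex.raw with parameter bindings.
    async function upsertWidget(id: number, name: string) {
      await knex.raw(
        `insert into widgets (id, name) values (?, ?)
         on conflict (id) do update set name = excluded.name`,
        [id, name],
      )
    }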


I get where you are coming from, but from your comment I am guessing you have yet to try Prisma.

I'll speak to my personal experience...

I became allergic to ORMs after experiencing much of the pain that you describe. Like you, I quickly found ORMs were simply an additional domain language / abstraction over my database that provided more pain than usefulness. Every time I wanted to make a change to the code I had to wade through tons of docs and/or stackoverflow posts by other frustrated users. If I wanted type safety I had to express and maintain types/decoders/encoders myself. Huge pain, and things always got stale, leading to a massive mistrust in my data layers.

Prisma doesn't feel like those experiences. Their schema-first, client code-gen approach works surprisingly well. Using the generated API feels really intuitive, and TypeScript is there the whole time providing guidance and autocomplete for me. The object tree query syntax is quite refreshing compared to the builder pattern approach taken by the alternatives. I always found the builder pattern overwhelming, and composing queries often felt like a guessing game.
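
For example, a nested query reads like the shape of the data you want back (the model names here are placeholders from an imagined blog schema):

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    async function authorsWithPublishedPosts() {
      // Users who have at least one published post, each returned with their posts;
      // the query mirrors the object tree that comes back.
      return prisma.user.findMany({
        where: { posts: { some: { published: true } } },
        include: { posts: true },
      })
    }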

I think Prisma doesn't try to be too clever with their data API. They solve the 99% case in a manner that is simple and convenient; for everything else you have the raw query API, much like other solutions.

I'd suggest giving it a try. You may like it.


That sounds awful, and completely different from my experience with ORMs. My experience has almost exclusively been entity framework, which despite having some warts (rank over partition queries are impossible) has been a very pleasant experience.

One advantage is that the additional domain language is also the language of array/list manipulation, and that you don't have to maintain any encoders/decoders (I honestly don't know what these are).


Interesting, coming from Eloquent ORM (PHP), I hated Entity Framework. It seemed to want to do far too much clever stuff (like save an entire object graph at once), and didn't have nearly as many escape hatches as I'm accustomed to (Eloquent will let you inject raw SQL nearly anywhere). I've had positive experiences with ORMs, but only when they are thin layers over the underlying SQL.


What was the problem with raw SQL you ran into with Entity Framework?

Entity Framework follows the unit-of-work pattern from the Gang of Four, and if I remember correctly Eloquent follows the active record pattern.

When I started I found the active record pattern much more intuitive, but I prefer the unit-of-work pattern now, mostly because I think it works better with transactions/constraints than the active record pattern.


Oh, there are two different kinds of ORM.

You are talking about the one where the goal is to make querying the database more idiomatic in your development language, at the cost of some flexibility. This works very well as long as you stay within the bounds of the abstraction, and breaks terribly when you step out of it. The engineering goal is to make the abstraction just broad enough to represent most of the common queries without making it less idiomatic.

The second kind is the type that tries to abstract databases into a specialized query language. The goal here is to bring things you don't get on plain SQL (like type integration or a single DBMS independent language) without losing expressive power. That's the one the GP is talking about.


> and breaks terribly when you step out of it

Maybe you mean something else, but I haven't had any issues when I've had to break out of the ORM and write portions in SQL. (usually once or twice every couple of development man years)

I'm not sure what type integration is, but the ORM I'm most familiar with does allow spanning multiple DBMSs with the same code (except when you had to drop into DBMS-specific SQL for performance reasons).

Is type integration allowing static type checks against your query language? Entity Framework does this as well.

I think EF is the second type of ORM, unless I'm misunderstanding you.


The tax of untyped languages and/or simple/generic API abstractions over databases.


Nikolas from the Prisma team here, thanks a lot for your comment!

> The problem with this tool, like every other multi-SQL-flavor ORM and query builder, is that it requires users to learn yet another language. In addition to Node.js and SQL, users need to learn the Prisma query language. This is not trivial, and users that are already accustomed to working with SQL will need to relearn PrismaSQL.

I'm not sure I'd fully agree with this! The "other language" in this case is an intuitive and natural API (in Node.js/TypeScript) for querying data [1], so hopefully there won't be much overhead to "learn" anything new. It should rather be the opposite and pretty straightforward to pick up; auto-completion and type safety will also contribute to making the experience of querying data fluent without much learning overhead.

We specifically decided to abstract away from SQL because we found that many developers don't feel productive with SQL as their main database abstraction [2] (that's also why so many people roll their own data access layers in the end).

> I think the best approach to this problem is a single-SQL-flavor query builder that attempts to match SQL as closely as possible while adding in the niceties of being able to pass in JavaScript objects instead of raw SQL strings. Let's be honest: raw SQL-in-JS is no fun.

It sounds like your thinking is generally aligned with ours actually! The main difference is that we concluded that the query builder shouldn't be SQL-flavored but just a natural API for any Node.js or TypeScript devs.

[1] https://www.prisma.io/blog/announcing-prisma-2-n0v98rzc8br1#...

[2] https://www.prisma.io/docs/understand-prisma/why-prisma#appl...


Thanks for your response Nikolas!

Prisma's approach is well thought out, and I appreciate the new angle on an old and challenging problem. For Typescript users, especially new users, Prisma can be a big win. Abstracting SQL has huge benefits here (especially type safety/static analysis), and I look forward to seeing where this project goes.

That being said, my favorite database is PostgreSQL because it has so many features (and I'm just comfortable with it). At some point, a tool like Prisma (or Knex, TypeORM, etc.) just cannot support all PostgreSQL features because it needs to support other flavors too. While some users may find this trade-off acceptable, I always find myself hacking around the tool to use the raw features. Therefore, my ideal environment would be a full-featured PostgreSQL query builder.

TL;DR I see the benefits of Prisma, but they're not for me at this point


Do you use an ORM with PostgreSQL in Node.js right now?

I’m thinking of porting a project from Firebase to Postgres and whether to use an ORM and which one are hotly contested points in my research thus far.


You might want to consider Hasura. It's a GraphQL layer on top of Postgres with realtime support. We're in the middle of porting a Firebase project to Postgres, and we're using a mix of Hasura, raw SQL, and knex. To be honest, we have more raw SQL (via `knex.raw`) than anything. We use the knex query builder for highly dynamic queries, and Hasura where we need realtime support.

If you want "more ORM" then Knex then `Objection.js` is a good option. I think the other main option in node land is TypeORM. Either of there is probably a good choice.


Specifically in Node.js, https://github.com/gajus/slonik has drastically improved the experience of writing and composing SQL queries. I am the author of Slonik.
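
A small taste of what that looks like (API as of the version I'm on; the connection string, table, and email are placeholders):

    import { createPool, sql } from 'slonik'

    async function main() {
      const pool = await createPool('postgres://localhost/mydb')

      // Values interpolated through the `sql` tag become bound parameters,
      // not string concatenation.
      const user = await pool.maybeOne(sql`
        SELECT id, email
        FROM users
        WHERE email = ${'jane@example.com'}
      `)

      console.log(user)
    }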


We use slonik and absolutely love it. Alongside polymode (for inline sql mode) in emacs it’s wonderful to use. Thank you for writing and maintaining such a great library.


"I think the best approach to this problem is a single-SQL-flavor query builder that attempts to match SQL as closely as possibly while adding in the niceties of being able to pass in JavaScript objects instead of raw SQL strings. Lets be honest: raw SQL-in-JS is no fun."

You essentially describe Elixir's Ecto, which has been lovely to work with. You may want to check it out.


You need to try it first, dude...

There is no query-based language; the runtime API is generated from your DB layer and is fully TS/JS, so there is no new language to learn. It is just TS/JS.

You don't know which API to use? Type a dot and all the APIs are there; this is called IntelliSense.

So, it is not `like every other multi-SQL-flavor ORM and query builder`

`I think the best approach to this problem is a single-SQL-flavor query builder that attempts to match SQL as closely as possible` - you do not work with GraphQL, do you...

As for Knex, I think Objection.js is much better.


What you describe is very similar to Rezoom.SQL for F#. https://github.com/rspeele/Rezoom.SQL

It has type checked queries in plain SQL (based on SQLite syntax), compile time consistency checks, autocomplete, schema migrations, and more. The normal queries have all the guarantees, but in case you might want to use some vendor specific features, it also has the option of vendor queries.

Very very cool library.


JOOQ from the JVM world is exactly that: type-safe SQL with a DSL that reads pretty much the same as good ol' SQL.


TypeORM? Zapatos maybe?



Yeah, thanks. I can never remember how to do markdown links so I don't even try.


I'm super excited about Prisma 2! I feel like we finally have a database client that (1) is/will be very powerful and (2) is very approachable and easy for beginners to learn.

It certainly doesn't replace the need to know some SQL, but it does delay that, which is great for so many people.

I'm definitely using this instead of any ORM for every project I can.

I really love having the fully typed interface for Typescript. Both for static type checking and for code completion in your editor.

We are using Prisma 2 as the default database client for Blitz.js [1] which results in a super nice stack. Especially because the Prisma DB types flow all the way into your React components.

[1] https://blitzjs.com


What are your thoughts on RedwoodJS?

https://github.com/redwoodjs/redwood


Great alternative if you don't care about Next.js and you absolutely want to use GraphQL.


If I understand correctly, transactions are currently not supported for doing any advanced business logic beyond building CRUD: https://www.prisma.io/docs/reference/tools-and-interfaces/pr...

Also, no mention of aggregations, or if someone could point me to it?

As far as typed query builders go, things like JOOQ are simply amazing, but I fear that the query building approach has not really caught on for database access and people seem to prefer "object oriented" methods like ORMs.

Any comments from HN folks about why that is?


People are scared of SQL, or they want to use their objects for data access, business logic and views at the same time (leading to bad architecture and headaches), or they come from the NoSQL world, or they want a shiny new tool, or they do not understand the features of their DB well, or because "everybody is using ORM xyz"... Just off the top of my head; there are probably more.


How can an ORM be used with Prisma?


For those of you who prefer talks over blog posts to learn about new tools, I recently gave a talk introducing Prisma 2.0 and demoed how you can use it to build a REST API and a GraphQL API with a PostgreSQL database.

You can find the full recording here: https://www.youtube.com/watch?v=AnJxKWQG_fM


Not sure I 100% agree with their problem statement.

> the problem: Working with databases is difficult

Working with databases is a relatively solved problem. You can access them from just about any language on any platform. A more accurate statement would be: choosing the right access method to work with databases is difficult.


> A more accurate statement would be: choosing the right access method to work with databases is difficult.

For me, it's the fiddly bit where you interface between the programming language and the database that is a PITA.


This is exactly how it's meant in the article btw!


Agree to disagree. For certain kinds of programmers who tend to think more analytically, databases are easy to work with.

As someone who tends to think more visually or kinesthetically, I appreciate tools that are trying to solve problems at a less lingual/code level.


Doesn't a DB naturally embrace the 'visual'? I mean, there are tables. Every time I think of a table, I get a picture in my brain... xD


Well said. Though Hasura makes it easier.


"The problem: Working with databases is difficult"

This makes me think of all the brainstorming sessions I've attended in my life, with people asking over and over again, unconvinced by most of the answers: "OK, guys, seriously now, what problem do we think we're solving here?"


Scarily close to reality :D


:-P


Been using Prisma for a while. Nothing better than typing `await prisma.`, hitting ctrl+space, and having it autofill everything for you! :)


I'm really excited about Prisma and how it's contributing to the Node.js ecosystem. Two new full-stack frameworks, Redwood and Blitz, are based on it, and it's prominently mentioned in their READMEs. https://github.com/redwoodjs/redwood https://github.com/blitz-js/blitz

As a status announcement, this isn't quite as exciting as it sounds because migrations are still "experimental". Still great to see!


For those who'd rather work in Go, check out Super Graph, an automatic GraphQL-to-SQL compiler. It works as a library or a standalone service. It also supports a variety of auth schemes like Rails cookies, JWT, Firebase, etc. Super Graph auto-learns your database schema and relationships. https://github.com/dosco/super-graph


Just as a side-note, with Prisma we're planning to support more languages beyond the Node.js/TypeScript ecosystem. We're currently already working on a version of Prisma Client in Golang, you can see the first prototype and track the development on GitHub: https://github.com/prisma/prisma-client-go


Use it to replace the ORM in your own Go app. No more having to struggle with joins, etc.; just describe the data in GraphQL and Super Graph will generate the SQL for you.


This looks great. Do you know if anyone is using it in production?


Yup, a few startups, mostly using the standalone version. One even runs it alongside their Rails app on Google Cloud Run to add a high-performance GraphQL API to an existing Rails app. I'm speaking to a couple of people who want to use it within their traditional Go REST API to query the database instead of using an ORM.


I don't have my head in backend too much these days and haven't kept up with all of these solutions. It seems like Hasura is the more popular option right now? Or do they fit a slightly different space?


Prisma and Hasura are very different!

Prisma is a database toolkit that's used by application developers to develop server-side applications in Node.js and TypeScript (e.g. REST APIs, microservices, gRPC calls, GraphQL APIs, ..., anything that talks to a database). The main tool Prisma Client is a query builder that's used to programmatically send queries to a database from Node.js/TS.

Hasura is a "GraphQL-as-a-Service" provider that generates a GraphQL API for your database. This GraphQL API is typically accessed by frontend developers. That setup can be great when your application doesn't require a lot of business logic and the CRUD capabilities that are exposed in the GraphQL API fit your needs (though I believe you can add business logic in Hasura by integrating serverless functions).

With Prisma, you're still in full control of your own backend application and can choose whatever tech stack you like for developing it (as long as it's Node.js-based, though Prisma Client will be available in more languages in the future)!

By the way, we also love GraphQL. We're currently brewing a new "GraphQL application framework" that can be used on top of Prisma. That way it will be possible to auto-generate resolvers for Prisma models to reduce the boilerplate you need to write, while still keeping the full control of your GraphQL schema.

You can learn more about this here: https://www.nexusjs.org/


> That setup can be great when your application doesn’t require a lot of business logic and the CRUD capabilities that are exposed in the GraphQL API fit your needs (though I believe you can add business logic in Hasura by integrating serverless functions).

(I’m from Hasura)

You can extend business logic in Hasura in a number of ways, including (but not exclusively) ones that work well with serverless and async architectures. Other examples follow:

1. You can extend it by adding business logic in the database via user-defined functions. Eg: You want a fulltext search or a PostGIS function that is better off in the DB anyway.

2. You can bring your own GraphQL server with custom resolvers and Hasura will merge them into its own API and let you “join” across them as well.

3. You can bring REST APIs and add graphql types for them in Hasura and use it as custom resolvers that extend the schema as well.

Hasura’s key value add is an instant GraphQL API backed by your own data-sources (database, GraphQL, REST) and then a fine-grained authorization system on it.

Like Nikolas said, very different from Prisma. Hasura aims to add value as "infrastructure" by guaranteeing performance and security, whereas Prisma is like an ORM/database toolkit.


Thanks a lot for the clarification, Tirumarai! (Remote JOINs look awesome btw, congrats on the release :)


I love both Prisma 2 and Hasura! 2 awesome products!


It's my impression that Hasura caters more to (also) those that need to integrate with an existing database and schema, possibly with views and functions - would that be accurate?

Does it make sense to slap Prisma on top of an existing production database?


(I'm from Hasura).

It was definitely a design goal for us to make the existing production database use-cases as seamless as possible.

Instead of adding a new DSL on top of the database, Hasura maps much of the DML subset of SQL over to GraphQL (tables, views, functions) so that we're not re-inventing that bit, and the translation is restricted to the "relation set" to "tree" transformation. JSON aggregation and JSON operations in Postgres are phenomenal! Hasura's authz RLS-like layer injects authorization as well to make that GraphQL API actually useful.

JOOQ has probably done the most phenomenal job in mapping almost all database constructs to a native language library, but there's a solid amount of type magic there which I'm not sure is portable to every language.


> but there's a solid amount of type magic there which I'm not sure is portable to every language.

It is, but I've just been too lazy so far to actually do it.


It certainly makes sense to use Prisma with an existing database. In fact, Prisma is able to introspect your database and create a typesafe data access client for you.

If you give it a try and have a database handy, I bet you can have it up and running in less than 10 minutes.


Thank you both for replies (hasura and prisma).

Regarding Prisma, I see my impression was a bit off; it came from talking with our team, which used Prisma in a greenfield project with migrations quite happily. And I somehow forgot that migrations are still marked experimental and are kind of new.

https://www.prisma.io/docs/reference/tools-and-interfaces/pr...


What do you think of Hasura vs Postgraphile?

https://www.graphile.org/postgraphile/


Got it. Thank you both for your replies!


I knew someone was gonna mention Hasura here. What a stupid idea to have an API mapped to the DB one-to-one, then solve "the problem" of not being able to write custom logic by inventing some shit (Hasura actions). I reckon the reason Prisma pivoted to being an ORM alternative was that you realised how wrong such an idea was?

PostGraphile at least has the decency to acknowledge it in their docs[1] and suggest putting logic inside the database (not some "action" nonsense), which is an actual practice, albeit one most don't like. And they don't specifically tell you to hook the client directly to the damn generated API!

[1] https://www.graphile.org/postgraphile/evaluating


Having used both, I would pick Hasura for any project going forward. Time to productivity is so much faster with Hasura and “it just works”. Prisma, at least when I was using it, was introducing breaking changes regularly and whipsawing between major design choices.


I mean... this is just now the production-ready release. Everything for the past year on prisma2 has specifically been labeled as unstable. I wouldn't rag on them too hard for having breaking changes while in an alpha/beta phase.


This has been my experience as well. Then Prisma wasn't able to correctly "introspect" the database, threw a generic error, and that was the end.


Prisma is a little bit lower level -- you'd use it within a node backend to get data to and from the database in a typesafe way. Hasura could be used for the same thing, but you'd have to spend some time setting up your own JS client to use within a backend to talk to Hasura. I think Hasura shines a bit more for powering client side apps directly, who just make GraphQL calls right to the "database" service.


I found Prisma 2.0 good for prototyping a service serving a GraphQL-compliant API. One of the features not mentioned here is that it supports a rudimentary form of database query batching (see: https://github.com/prisma/prisma-client-js/issues/153 ), and there seems to be interest in improving it.

This helps with solving the n+1 problem to some extent without having to maintain code specifically for DataLoader + some ORM / custom query code. Comparatively, code written via the Prisma Client API is usually straightforward and succinct.
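
As a sketch, in a GraphQL resolver the fluent relation API looks roughly like this, and same-tick lookups are then candidates for the batching discussed in the linked issue instead of producing one query per parent (model names assumed; `findOne` is the Prisma 2.0-era name for what later became `findUnique`):

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    const resolvers = {
      Post: {
        // Called once per post in a result set; rather than issuing one SQL
        // query per post (the classic n+1), identical same-tick calls like
        // this can be batched by the client.
        author: (parent: { id: number }) =>
          prisma.post.findOne({ where: { id: parent.id } }).author(),
      },
    }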


Does it have migrations yet?

I'd like to give Prisma a try, but if there's no sane way to change my schema (part of my daily backend workflow) then it's less interesting, no matter how nice the API is. For now I'll stick with Sequelize.


Today's release unfortunately doesn't include our migration solution Prisma Migrate [1] yet. We totally understand that a lot of people want to get "database access" and "schema migrations" with the same tool/library, that's why we're focusing most of our engineering efforts on Prisma Migrate next and will hopefully be able to release that soon!

However, we do see a lot of folks using third-party migration tools (like knex.js or indeed Sequelize) and still getting the benefits of Prisma Client [2] through introspection [3] for the time being. For non-critical applications we also already see lots of users trying out Migrate and helping us improve it through constant feedback! I'd love to hear your thoughts on the current version so that we can make sure to consider your feedback and ideas for Migrate when building it out over the next few months.

[1] https://www.prisma.io/docs/reference/tools-and-interfaces/pr...

[2] https://www.prisma.io/blog/announcing-prisma-2-n0v98rzc8br1/...

[3] https://www.prisma.io/docs/reference/tools-and-interfaces/in...


Is there a plan for a higher-level migration API rather than just generating + executing raw sql strings?


I’m with the Product team at Prisma. Prisma Migrate (experimental) generates migrations from changes to the Prisma schema. These migrations use an internal DSL that ends up translating to SQL commands for relational DBs. Can you please elaborate a bit more on what you’d expect as a higher-level migration API?


They have, but it's "experimental" at the moment: https://www.prisma.io/docs/reference/tools-and-interfaces/pr...


I'm very glad to see work being done on db bindings, but until migrations reach the level of django/active record, I won't be using postgres + node.js seriously (again).


I'm with the product team at Prisma.

Prisma Migrate is different from ActiveRecord migrations (which we are very familiar with) because it is state-based: the Prisma schema file acts as the source of truth, and the DB schema will be migrated to match it.

Can you elaborate on what you would perceive as reaching the level of Django/ActiveRecord? I'd be interested in specific aspects/items.


That sounds great! I'm not super familiar with ActiveRecord, but I use Django migrations regularly. For me the things that stand out are

1. a declarative model - i.e. defining the DB schema rather than the migrations

2. auto generated migrations with the ability to customize

3. integration with tools for deployment and testing

You probably have a much better idea of the landscape, but reach out to Andrew Godwin [1]; he wrote South and then rewrote it to become Django migrations.

[1] - https://www.aeracode.org/


no worries, it's their goal :)


Many people don't care for their schema language anyway, so they won't miss writing migrations in it. Just do migrations the normal way, with SQL.


I love the idea of a type safe interface to a database, but I'll pass on learning yet another DSL. Seems like inevitably you get to a certain complexity and the DSL just falls apart and you wind up writing SQL regardless. Then you end up with half your queries written in one language (SQL) and the other half in the ORM DSL. SQL isn't that hard, and if you are using a SQL database you can't really escape knowing the concepts behind SQL relationships anyway, which is the tricky bit.

So, bring on type safe access, but don't make me learn yet another DSL which only works 70% of the time.


The DSL is only for table management, and it's pretty similar to a GraphQL type definition.

The ORM layer is not a DSL but some nicely done JS/TS functions.


> The ORM layer is not a DSL but some nicely done JS/TS functions

You are splitting hairs here as far as I'm concerned. You need to learn an API so you can do 70% of your queries. Then you need to learn SQL so you can do the other 30% of your queries and actually understand how to design a database. The queries that the "nicely done JS/TS functions" replace are almost always the simplest, most basic queries. Do you really need a special query language to say `select * from widgets`?

The big problem with every ORM layer is that you are essentially learning a disposable language. Every ORM says it's the best way to query ever, and yet here we are, 5000 ORMs later, and SQL is still an essential skill for developers.

I know... "This time it's different!".


The magic is great! Until it stops working in production...


I've been using the beta for the past couple months on a new project along with @nexus/schema to build a GraphQL server. This hits a sweet spot for me where I'm not having to manually duplicate a bunch of information in my GraphQL schema that's easily derivable from my database schema, but I still have the freedom to implement custom resolvers and use Prisma directly (or whatever else) when I need to. It's a good stack for building a GraphQL server around a Postgres db.

The main problems I've run into have been around utilizing standard postgres naming patterns (snake case for tables and fields instead of camelcase) and mapping the names in the prisma schema. Ran into a handful of bugs related to having these mappings that have all been fixed since. It still requires a post-introspect step to add the mappings, but that's not too big of a deal. Ideally the introspection would be able to handle database-specific conventions.

Couple of other things I've run into that already have github issues:

- It would be great if, along with the create/connect options on relationships for nested writes, there was also an upsert (see the sketch below for what I mean by nested writes).

- Better transaction support beyond just nested writes would be great and probably a requirement for a lot of apps. Thankfully, my server is relatively simple right now, so I'm banking a bit on Prisma improving as my app grows in complexity.
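
For reference, the nested-write shape I mean is roughly this (model names are from my own schema, so treat them as placeholders):

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    async function createPost() {
      return prisma.post.create({
        data: {
          title: 'Hello world',
          author: {
            // connect the new post to an existing author...
            connect: { id: 1 },
            // ...or create one inline instead:
            // create: { email: 'new@example.com' },
          },
        },
      })
    }

An `upsert` option sitting next to create/connect there is the missing piece.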


Thanks so much for sharing your experiences with Prisma!

> The main problems I've run into have been around utilizing standard postgres naming patterns (snake case for tables and fields instead of camelcase) and mapping the names in the prisma schema. Ran into a handful of bugs related to having these mappings that have all been fixed since.

Better re-introspection flows are indeed very much on our radar and something that we want to tackle soon! Would be great if you could leave a comment with your use case on GitHub [1], so we can make sure to address it properly when planning and prioritizing new features! :)

> Better transaction support beyond just nested writes would be great and probably a requirement for a lot of apps.

Same here! It would be really helpful for us if you could share some details about your use cases for transactions in the feature request [2] so that we can incorporate them in our planning and design of the feature!

[1] https://github.com/prisma/prisma/issues/2425

[2] https://github.com/prisma/prisma/issues/1844


Any plans to solve the SQL long string problem? I hate having to write another custom SQL migration script beside the Prisma migration script.

For your reference: https://github.com/prisma/prisma/discussions/2138


A thread on this from a couple months ago: https://news.ycombinator.com/item?id=22739121


It's funny how suddenly everyone is moving towards types. A couple of years back that was frowned upon and seen as counterproductive.


True. A big appeal for me with TS is that the inference engine is decent, so it doesn't feel too ceremonious, and I can also tap out of the type system at any point and write a bit more gnarly code. Typing just the boundaries can be super helpful sometimes.


If by "a couple" you mean 10 to 15, then yes, that's correct.

Look at the modern type systems we have around and try to see how they are different from what was mainstream by that time.


I am not talking about the state of the type systems, but about how they were portrayed, especially back when dynamic languages were getting popular.


So with Prisma 2.0 I could replace https://www.npmjs.com/package/schemats + https://sqorn.org/ ?


I have a lot of experience with ORMs (the Django one in particular). I'm having a hard time finding a quick overview of how Prisma's model is different. Are there any good overviews of the similarities and differences?


Daniel from the Prisma team here.

Both Prisma and ORMs abstract away from SQL and let you "think in objects". However, how they do that is different.

With ORMs, you typically map tables to model classes. With Prisma, the focus is on queries and structural typing; queries return plain objects that are fully typed based on the query.

For a broader comparison with ORMs, check out the documentation page about why Prisma is not an ORM: https://www.prisma.io/docs/understand-prisma/prisma-in-your-...


Cool, thank you very much!


I think one very common characteristic of most ORMs is that tables are defined in terms of classes (often called models). You then instantiate these classes to work with the model instances for data storage and retrieval.

Prisma takes a fundamentally different approach by generating a database client that returns plain old JS objects. We've written more extensively about this topic in the docs: https://www.prisma.io/docs/understand-prisma/why-prisma
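
Concretely (model and field names here are illustrative):

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    async function example() {
      // The result is a plain, fully typed object, not an instance of a model class:
      const user = await prisma.user.findOne({ where: { id: 1 } })
      console.log(user?.name)

      // There is no `user.save()`; writes go back through the client:
      await prisma.user.update({ where: { id: 1 }, data: { name: 'Ada' } })
    }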


I happened across Prisma recently while looking for a (more) declarative way of managing SQL migrations. It's far from complete, but it's a nice feature.


I'm not a Node dev, but I can see the ActiveRecord (from Rails) inspired ideology behind this. While ActiveRecord certainly has its downsides, its awesome upsides/ideas should definitely be reused.



