
It also usually forces your design towards the entities themselves rather than the specific way they're stored, which positions you better for switching to a completely different storage system in the future if, for instance, it's becoming too slow or expensive to maintain everything in a traditional big-name RDBMS.



> forces your design towards the entities themselves

I agree that it's very important to not let the physical schema leak into the rest of the system, and to have a strong conceptual model (aka entities and relations). This has been well understood for almost half a century: https://en.wikipedia.org/wiki/Three-schema_approach

But I don't think ORMs are in any special position to help with this. They typically introduce so much other confusion that they tend to divert attention from designing both a good physical schema and a good conceptual model, and from maintaining a sensible mapping between the two. This can be done with RDBMS views, for example, at a fraction of the overhead of an ORM. Most ORM-based code bases I've seen leak tons of db-level details.
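As a sketch of the view-based approach (illustrative only - the table and column names are made up), the physical schema can stay storage-oriented while the rest of the system only ever queries the conceptual entity through a view:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Physical schema: storage-level details (hypothetical legacy naming,
# soft-delete flag) that the rest of the system should never see.
conn.execute("""
    CREATE TABLE tbl_user_v2 (
        usr_id INTEGER PRIMARY KEY,
        email_addr TEXT,
        fname TEXT,
        lname TEXT,
        soft_deleted INTEGER DEFAULT 0
    )
""")

# Conceptual model: a view exposing clean entity attributes.
# Application code queries "users" and stays insulated from the
# physical naming and the soft-delete mechanism.
conn.execute("""
    CREATE VIEW users AS
    SELECT usr_id AS id,
           email_addr AS email,
           fname || ' ' || lname AS name
    FROM tbl_user_v2
    WHERE soft_deleted = 0
""")

conn.execute(
    "INSERT INTO tbl_user_v2 (email_addr, fname, lname) "
    "VALUES ('a@example.com', 'Ada', 'Lovelace')"
)
row = conn.execute("SELECT id, email, name FROM users").fetchone()
print(row)  # (1, 'a@example.com', 'Ada Lovelace')
```

If the physical schema later changes, only the view definition needs updating - the same decoupling an ORM mapping layer promises, with no extra runtime machinery.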

> switching to a completely different storage system in the future

Designing for this eventuality is not healthy IMO. If you get there, it will be a so-called "good problem to have" and you will have to deal with whatever unique challenges you face at that level. We might as well be writing code with the possibility of "switching to a completely different programming language in the future" in mind. Yes, clean, modular code will help, but beyond that, not committing to the capabilities of the tools you have chosen will harm your system.


I disagree with pretty much everything you said. :)

I think there are plenty of bad ORMs and there are plenty of ways to use the good ones in a bad way, but that doesn't mean that they aren't providing the value I mentioned. For instance, Entity Framework Core with code-first migrations has you designing the data models themselves, then wiring up relationships and other metadata (indexes, keys, etc.) in the DB context itself - your actual entities are completely portable and have nothing to do with the db itself outside of being used by it.

And sure, needing to switch to another storage system may be a good problem to have... that doesn’t mean you should explicitly tie all of your code to one particular RDBMS. If a user is a user is a user, it shouldn’t matter to anything else in your codebase how or where it is stored, it should still be the same entity. Moving those users from your SQL Server to Mongo or to a third party like Auth0 or an Azure/AWS/etc. federated directory service doesn’t change the fact that every user has an ID, an email, a name, etc.
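To illustrate the portability point with a sketch (hypothetical names, not tied to any particular framework): if the rest of the codebase depends only on the entity and a minimal storage interface, moving users from one backend to another means writing one new adapter, not touching every caller:

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class User:
    # The entity: every user has an ID, an email, a name,
    # regardless of where it is stored.
    id: str
    email: str
    name: str


class UserStore(Protocol):
    # The only contract callers see; a SQL Server-, Mongo-, or
    # Auth0-backed store would implement these same two methods.
    def get(self, user_id: str) -> Optional[User]: ...
    def save(self, user: User) -> None: ...


class InMemoryUserStore:
    """Stand-in backend; swap for any other UserStore implementation."""

    def __init__(self) -> None:
        self._users: dict[str, User] = {}

    def get(self, user_id: str) -> Optional[User]:
        return self._users.get(user_id)

    def save(self, user: User) -> None:
        self._users[user.id] = user


store: UserStore = InMemoryUserStore()
store.save(User(id="42", email="a@example.com", name="Ada"))
print(store.get("42").email)  # a@example.com
```

The entity and the interface stay stable; only the adapter behind `UserStore` changes when the storage does.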

Code for today, but design for tomorrow.


It's well-known to be a topic that splits opinion, so I'm not surprised we disagree :) To me, "designing the data model", "wiring up relationships", etc. doesn't require an ORM. On the other hand, I do agree it's good to have some tooling around it, and that's something many more bare-bones frameworks (ORM or not) are lacking.

I don't hear people talk about "coding for the web, but designing so that you can easily switch to deploying as a Windows desktop app." Or "write it in Python, but in such a way that we can easily swap to OCaml." It seems to me databases are uniquely treated this way, as some kind of disposable, simple piece of side equipment. Again, modular code will always be easier to migrate, but I prefer to take full advantage of db capabilities, as it results in much less code and frees up time and mental space to focus on a good conceptual model and physical schema, among other things.

I've never used EF, so I might not see what you are seeing.


> It seems to me databases are uniquely treated this way, as some kind of disposable, simple piece of side equipment

This is exactly right - lots of people are still cargo-culting rules of thumb that no longer make any sense.

This was an artifact of the last generation's commercial DB market. Open source DBs weren't "there" yet; a combination of real limitations and risk-conservatism kept companies shoveling huge amounts of money at vendors for features and stability now provided by `apt-get install postgresql`.

If you just lit seven figures on fire for a database license, you're not hungry to do it again, so you wanted all your software to be compatible with whichever vendor you just locked yourself in to. And certain DB vendors are very well known for brass-knuckle negotiation; if you could credibly threaten to migrate to $competition instead of upgrading, it was one of the few actually useful negotiating levers available.

Today, open source DBs are better than the commercial ones in many situations, certainly not worse in general use, and the costs of running a bunch of different ones are far lower. Not to mention, the best way to win a software audit is to run zero instances of something.


Very useful historical perspective, thanks! Confirms what I had pieced together: that DBs used to be a big liability for organizations, with a special clan of people (DBAs) gatekeeping and introducing patterns that programmers found infuriating. Hence the hatred towards stored procedures, layered schemas, and databases in general. It's probably important to keep stressing, as you do, how different things are now. It's only been a few years that Postgres has had row-level security, for example.


DBAs still have their place. In my shop, we have more DBAs than infrastructure people.

When you have a small team working on a given tool that only really needs to manage its own data, it really doesn't matter. But at some point, you do need expert gatekeepers to tell engineers when they're Doing It Wrong - when there are many heterogeneous clients accessing large datastores for different purposes, complex audit requirements, etc.


Yes specialization is often useful. But the divide between developers and DBAs seems to have been similar to the dev/ops divide. Probably still is in many places. There is always a need for seniors or specialists to guide work, I'm not against that. But something like DevOps for RDBMS is needed. DevDat?


The GP has no sense of cost: designing around being DB-agnostic is costly, and those who really need that flexibility are in the 1%.


Haven't heard specifically of the terminology "three schema approach", but the idea fits with other notions I've heard about: https://www.martinfowler.com/bliki/BoundedContext.html and Clean Architecture: https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-a...


The number of teams who design a data model & ORM layer "just to make it easy to move later on": Lots

The number of teams who eventually move to a different data store? Almost zero.

Getting "locked in" to a database is a non-issue. In fact you should get locked into a database system, provided you picked a good one to start with. Most teams never even scratch the surface of what a powerful DB like Postgres can do for them and it breaks my heart every time.


> which positions you better for switching to a completely different storage system in the future if

YAGNI



