Usually the whole row is copied, updated, and then appended to a different file on disk. After some time a background job removes the stale rows for good, rewriting files and compacting the database. This has lots of advantages: on a small scale, resilience against power outages (journaling); on a large scale, you can e.g. sync a second DB / backup by just streaming the appends.
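A toy sketch of the idea in Python (updates only ever append, and a compaction pass later rewrites the file keeping the newest version of each key); names and file layout are made up for illustration, not how any particular DB lays things out:

```python
import json
import os

class AppendOnlyStore:
    """Toy append-only store: updates never modify old data in place."""

    def __init__(self, path):
        self.path = path
        open(self.path, "a").close()  # make sure the log file exists

    def put(self, key, value):
        # An "update" is just a new record appended at the end of the log.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")

    def get(self, key):
        # The latest record for a key wins; older ones are dead weight on disk.
        result = None
        with open(self.path) as f:
            for line in f:
                rec = json.loads(line)
                if rec["key"] == key:
                    result = rec["value"]
        return result

    def compact(self):
        # The "after some time" job: rewrite the log keeping only the newest
        # version of each key, then atomically swap the files.
        latest = {}
        with open(self.path) as f:
            for line in f:
                rec = json.loads(line)
                latest[rec["key"]] = rec["value"]
        tmp = self.path + ".compact"
        with open(tmp, "w") as f:
            for key, value in latest.items():
                f.write(json.dumps({"key": key, "value": value}) + "\n")
        os.replace(tmp, self.path)

store = AppendOnlyStore("toy.log")
store.put("user:1", {"name": "Ada"})
store.put("user:1", {"name": "Ada Lovelace"})  # the old row stays on disk
store.compact()                                # ...until compaction drops it
print(store.get("user:1"))
```

Replicating to a second copy in this model amounts to shipping the appended lines as they are written.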
Simplifying a lot here, of course. E.g. if your DB uses MVCC (multiversion concurrency control), old values cannot be removed until the last transaction that can still see them has finished.
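Roughly how that constraint shows up in the compaction step (a sketch, not any real engine's visibility rules): a superseded version only becomes garbage once no open transaction's snapshot can still see it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    value: object
    created_txn: int                 # transaction id that wrote this version
    superseded_txn: Optional[int]    # txn id that replaced it, or None if current

def prunable(version, oldest_active_snapshot):
    # A superseded version may only be dropped once every transaction that
    # started before the replacement has finished, i.e. the oldest snapshot
    # still running began after the version was superseded.
    return (version.superseded_txn is not None
            and version.superseded_txn < oldest_active_snapshot)

versions = [
    Version({"name": "Ada"}, created_txn=5, superseded_txn=9),
    Version({"name": "Ada Lovelace"}, created_txn=9, superseded_txn=None),
]
# With a transaction on snapshot 7 still open, the old version must stay:
print([prunable(v, oldest_active_snapshot=7) for v in versions])   # [False, False]
# Once the oldest active snapshot is 12, compaction may drop it:
print([prunable(v, oldest_active_snapshot=12) for v in versions])  # [True, False]
```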
Of course, if you write your own storage engine you can do whatever you want. I wrote one for MariaDB a few years ago; you basically just have to implement the minimally required interfaces for it to work.
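To give a feel for what "the minimally required interfaces" means, here is a Python sketch of the kind of surface a table engine has to cover (open/close, write a row, scan rows). The real MariaDB plugin API is a C++ handler class with considerably more to it; these names are illustrative only.

```python
from abc import ABC, abstractmethod

class TableEngine(ABC):
    """Illustrative shape of a minimal storage engine interface.
    Not the actual MariaDB handler API, just the general idea."""

    @abstractmethod
    def open(self, table_name): ...

    @abstractmethod
    def close(self): ...

    @abstractmethod
    def write_row(self, row): ...

    @abstractmethod
    def scan(self):
        """Yield every row; filtering (WHERE etc.) happens above this layer."""

class InMemoryEngine(TableEngine):
    # Smallest possible "engine": keep rows in a Python list.
    def open(self, table_name):
        self.table_name = table_name
        self.rows = []

    def close(self):
        self.rows = []

    def write_row(self, row):
        self.rows.append(row)

    def scan(self):
        yield from self.rows

engine = InMemoryEngine()
engine.open("users")
engine.write_row({"id": 1, "name": "Ada"})
print(list(engine.scan()))
```

As long as those hooks behave, the server layer on top doesn't care whether the data lives in memory, in append-only files, or somewhere else entirely.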
DBs traditionally have these technical details figured out pretty well, and the line between what is part of the DB and what is done by the OS can be blurry at times.