Why would this be an indictment of any specific database technology? If your disk fails and corrupts the filesystem, you're toast, regardless of what database you are using.
Having worked with critical infrastructure that lacks true in-depth oversight, it wouldn't surprise me if the DR plans were never executed or exercised in a meaningful manner.
Yep, and if you ship WAL transaction logs to standby databases/replicas, corrupt blocks or lost writes in the primary database won't be propagated to the standbys (unlike with OS filesystem or storage-level replication).
Neither checks checksums on every read, as that would be prohibitively expensive. So "bad data on the drive -> the DB does something with the corrupted data and saves the corrupted transformation back to disk" is very much possible, just extremely unlikely.
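To make that failure mode concrete, here's a toy sketch (a made-up page format, not any real engine's layout): pages carry checksums, but if the engine skips verification on the hot read path, rotted bytes can be read, "processed", and written back under a fresh, valid checksum.

```python
import hashlib
import os

PAGE_SIZE = 8192  # hypothetical page size for this toy format

def write_page(f, page_no: int, payload: bytes) -> None:
    """Store a page as a 32-byte SHA-256 checksum followed by the payload."""
    assert len(payload) == PAGE_SIZE - 32
    f.seek(page_no * PAGE_SIZE)
    f.write(hashlib.sha256(payload).digest() + payload)

def read_page(f, page_no: int, verify: bool = False) -> bytes:
    """Read a page, optionally verifying its checksum (the expensive path)."""
    f.seek(page_no * PAGE_SIZE)
    block = f.read(PAGE_SIZE)
    stored, payload = block[:32], block[32:]
    if verify and hashlib.sha256(payload).digest() != stored:
        raise IOError(f"checksum mismatch on page {page_no}")
    return payload

with open("demo.db", "w+b") as f:
    write_page(f, 0, b"x" * (PAGE_SIZE - 32))

    # Simulate silent bit rot inside the stored payload, bypassing write_page.
    f.seek(0 * PAGE_SIZE + 40)
    f.write(b"\x00")

    # Fast path: no verification, so the rot goes unnoticed ...
    damaged = read_page(f, 0, verify=False)
    # ... and rewriting the "transformed" page stamps a fresh, valid
    # checksum onto corrupted data, making the damage permanent.
    write_page(f, 0, damaged)

    read_page(f, 0, verify=True)  # now passes despite the corruption

os.remove("demo.db")
```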
But they said nothing about it being a bad drive, just a corrupted data file, which could very well be a software bug or operator error.
RAID does not really protect you from bit rot, which tends to happen from time to time.
ZFS might, because it checksums the blocks.
But if the corruption happens in memory and is then written to disk and replicated, then from the disk's perspective the data was valid.
Journaling databases are specifically designed to avoid catastrophic corruption in the event of disk failure. The corrupt pages should be detected and reported, and the database will function fine without them.
If you mean journaling file systems, no. They prevent data corruption in the case of a system crash or power outage.
That's different from filesystems that do checksumming (zfs, btrfs). Those can detect corruption.
In any case, if you use a database it handles these things by itself (see ACID). However, I don't believe databases can necessarily detect disk corruption in all cases, the way checksumming file systems can.
Well, for example, MySQL/MariaDB with utf8 tables will instantly go down if someone inserts a single 4-byte emoji character, and the only way out is to recreate all tables as utf8mb4 and reimport all data.
MySQL historically isn't very good about blocking bad data. Sometimes it would silently truncate strings to fit the column type, for example. It's getting better as time goes on, though.
I have had customer production sites go down due to this issue when emojis first arrived. It was a common issue in 2015. I would hope it is fixed by now!
Having dealt with utf8mb4 data being inserted into utf8mb3 columns many, many times in the past, I've never had a table "instantly go down". You either get silent truncation or a refusal to insert the data.
In MySQL the `utf8` character set has historically been an alias for `utf8mb3`. The alias is deprecated as of 8.0 and will eventually be switched to mean `utf8mb4` instead. The `utf8mb3` charset means the data is UTF-8 encoded, but only supports up to 3 bytes per character, instead of the full 4 bytes needed for characters like emoji.
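For what it's worth, a full recreate-and-reimport usually isn't required; tables can be converted in place with `ALTER TABLE ... CONVERT TO CHARACTER SET utf8mb4`. A minimal sketch of that, assuming PyMySQL, placeholder credentials/schema names, and a server where information_schema reports the old collations as `utf8mb3_*` (older versions report them as `utf8_*`):

```python
import pymysql

# Placeholder connection details.
conn = pymysql.connect(host="localhost", user="app", password="secret", database="mydb")
with conn.cursor() as cur:
    # Find tables still using a 3-byte utf8 collation.
    cur.execute(
        "SELECT TABLE_NAME FROM information_schema.TABLES "
        "WHERE TABLE_SCHEMA = %s AND TABLE_COLLATION LIKE 'utf8mb3%%'",
        ("mydb",),
    )
    for (table,) in cur.fetchall():
        # CONVERT TO CHARACTER SET rewrites the table and all its text
        # columns, so 4-byte characters (emoji) fit afterwards.
        cur.execute(
            f"ALTER TABLE `{table}` "
            "CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci"
        )
conn.close()
```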
Imagine you have one node which runs as a replica of another and takes the backups. Now pretend it backs up the corrupted data once in a while, and that happened to overwrite their cold backup. They could have used any number of databases and still had this failure. It's more about their methodology for taking backups: they should have many points in time to choose from when rebuilding their database, and they should be verifying the data before blindly backing it up.
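If it helps, here's a minimal sketch of that kind of retention discipline, with placeholder paths, a placeholder dump command, and only a cheap decompression check standing in for real restore testing. The point is that each dump gets its own timestamped file and nothing old is pruned until the new one passes a check, so a single cold copy never gets silently overwritten with corrupt data:

```python
import gzip
import shutil
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/backups/db")   # hypothetical destination
KEEP = 14                          # retain two weeks of daily dumps

def take_backup() -> Path:
    """Write a new, timestamped dump; never reuse an existing file name."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = BACKUP_DIR / f"dump-{stamp}.sql.gz"
    with subprocess.Popen(["mysqldump", "--all-databases"],
                          stdout=subprocess.PIPE) as proc, \
         gzip.open(dest, "wb") as out:
        shutil.copyfileobj(proc.stdout, out)
    if proc.returncode != 0:
        raise RuntimeError("mysqldump failed")
    return dest

def verify(dump: Path) -> bool:
    """Cheap integrity check: the archive must decompress end to end."""
    try:
        with gzip.open(dump, "rb") as f:
            while f.read(1024 * 1024):
                pass
        return True
    except (OSError, EOFError):
        return False

def prune() -> None:
    """Drop only the dumps that fall outside the retention window."""
    dumps = sorted(BACKUP_DIR.glob("dump-*.sql.gz"))
    for old in dumps[:-KEEP]:
        old.unlink()

new_dump = take_backup()
if not verify(new_dump):
    raise SystemExit(f"refusing to prune older backups: {new_dump} failed verification")
prune()
```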
> Speculation: corrupting input, either international or North American? E.G. UTF-8, SQL escape, CSV quoting.
I read it as filesystem corruption from a bad disk, coupled with redundancy that doesn't actually work.