As best they can, and by a lot of people running release candidates on test systems. Unfortunately, there is an essentially infinite (not really, just incredibly, impossibly large) number of software and hardware configurations that can keep a bug like this from triggering, so it's not always possible to catch it ahead of time.
But if anyone is storing critical data, they shouldn't be running a bleeding-edge kernel in the first damn place. It certainly "sucks" for the users who found it, but again, that's the risk you take.
And, this bug seems to have been found... a mere week after articles even reported the release of Kernel 4.14:
Which is a pretty amazing turnaround time for "crowd-sourced" discovery of a bug. And according to the bug tracker, it looks like they patched it the same day.
So as bad as this is (and I feel for those who lost data, I really do), the solution is simply not to run bleeding-edge kernels on release day. The very fact that there is a minor version number, 4.14.1 (.1), implies that things get released that need to be fixed; if you have critical data, you should be more patient.
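That version-number reasoning can even be automated before deploying. As a rough sketch (the `patch_level` helper and the ".0 means wait" rule are my own illustration, not any official kernel.org policy), a deploy script could flag a release-day kernel:

```shell
#!/bin/sh
# Illustrative helper (not part of any official tooling): pull the
# patch level out of a kernel version string like "4.14.1" or "4.14.0-rc2".
patch_level() {
    echo "$1" | cut -d. -f3 | cut -d- -f1
}

ver="4.14.1"   # in real use this would come from `uname -r`
p=$(patch_level "$ver")
if [ -z "$p" ] || [ "$p" = "0" ]; then
    echo "$ver is a release-day kernel; consider waiting for a point release."
else
    echo "$ver already has point-release fixes."
fi
```

The heuristic is deliberately crude: it only looks at the third version component, so a bare "4.14" or a "4.14.0" both count as release-day.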
It's been the official position for quite a while that kernel.org releases, even "stable" ones, are to be considered bleeding-edge, and that it's the distros' responsibility to do the necessary cooking on users' behalf to avoid unleashing something raw on them. Long gone are the days when regular users could be trusted (and could trust) to just grab the latest linux-2.0.36 tarball from there and upgrade by compiling it themselves.
This fix doesn't seem to have made it into 4.14.1, unfortunately, and the Gentoo bug report came almost a week after the issue was identified and a patch was posted to the bcache mailing list.
If you have really critical data, I expect you to run the latest possible kernel on part of your cluster. That lets you spot potential regressions early and also test your backup/recovery solution in case something this bad happens.