If I'm reading this correctly, Joe uses Docker running on the same host as your regular production DB: it provisions a snapshot and a new Docker container to mount it. So yes, your production DB data is 'safe', but if your query uses a lot of CPU or disk I/O, it's still going to fight the other containers on the system for resources. E.g., a full table scan of a 2 TB table is going to affect the other Docker containers running on that same server.
Joe uses Docker and ZFS (or, alternatively, LVM + ext4) on a separate machine. Data is normally transferred there from archives (optionally on a continuous basis, if a small Postgres "sync" server with restore_command is configured), stored on ZFS, periodically snapshotted and prepared (at this point it can also be anonymized, see https://gitlab.com/postgres-ai/database-lab/-/blob/master/sc...), and then snapshotted again. Such snapshots of the prepared PGDATA can be taken periodically, and from them we do thin provisioning in a few seconds.
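For the optional "sync" server, the idea is just a standby that keeps replaying WAL from the archive. A minimal sketch (the wal-g call and paths are illustrative assumptions, not Database Lab's actual config; restore_command and standby.signal are standard Postgres 12+ recovery settings):

  # on the "sync" server: keep replaying WAL fetched from the archive
  cat >> "$PGDATA/postgresql.conf" <<'EOF'
  restore_command = 'wal-g wal-fetch %f %p'   # or e.g.: cp /path/to/wal_archive/%f %p
  EOF
  touch "$PGDATA/standby.signal"              # start Postgres as a standby in recovery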
All of this happens on a fully separate machine, without affecting the production nodes in any way.
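To illustrate the thin-provisioning step itself, something along these lines is why a clone appears in seconds regardless of database size (dataset names, mount points, and the image tag below are hypothetical, not what Database Lab actually runs):

  # snapshot the prepared PGDATA, then clone it (copy-on-write, near-instant)
  zfs snapshot dblab/pgdata@prepared
  zfs clone dblab/pgdata@prepared dblab/clone-joe-1

  # run a throwaway Postgres container on top of the clone
  docker run -d --name joe-clone-1 \
    -v /dblab/clone-joe-1:/var/lib/postgresql/data \
    postgres:13

  # when the session ends, drop the clone; the snapshot stays for the next one
  docker rm -f joe-clone-1 && zfs destroy dblab/clone-joe-1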