Some file systems, like ZFS and btrfs, support checksums at the block level. However, a file-level checksum is not trivial on a random-write file system. Imagine 1 byte changing in a 20 GB file - that would require a full file scan to update the checksum.
Recalculating the file checksum from all the block checksums could be a solution, but that is far from any standard.
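A minimal sketch of that idea (the 128 KiB block size and SHA-256 are just assumptions, not any file system's actual scheme): the file-level checksum is a hash over the list of block digests, so a 1-byte write only rehashes one block plus the small digest list.

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # hypothetical fixed block size

def block_checksums(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            digests.append(hashlib.sha256(chunk).digest())
    return digests

def file_checksum(digests):
    """File-level checksum = hash over the concatenated block digests."""
    return hashlib.sha256(b"".join(digests)).hexdigest()

def update_block(digests, index, new_block_data):
    """After a write to one block, only that block is rehashed."""
    digests[index] = hashlib.sha256(new_block_data).digest()
    return file_checksum(digests)
```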
Wouldn't a Merkle tree, like torrents or dat use, avoid this? You'd only have to hash the changed chunk, and then updating the root hash is really cheap.
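Rough sketch of that (chunking and odd-node padding rules here are assumptions, not what torrents or dat actually specify): updating one chunk only rehashes the O(log n) nodes on its path to the root instead of rescanning the file.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def build_tree(chunks):
    """Return a list of levels; level 0 = leaf hashes, last level = [root]."""
    level = [h(c) for c in chunks]
    levels = [level]
    while len(level) > 1:
        # Pair up nodes; duplicate the last one if the count is odd.
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def update_chunk(levels, index, new_chunk):
    """Rehash one leaf and walk up to the root: log(n) hashes, not a full scan."""
    levels[0][index] = h(new_chunk)
    for depth in range(1, len(levels)):
        index //= 2
        lo = 2 * index
        left = levels[depth - 1][lo]
        # Reuse the left sibling when the right one is missing (odd count).
        right = levels[depth - 1][lo + 1] if lo + 1 < len(levels[depth - 1]) else left
        levels[depth][index] = h(left + right)
    return levels[-1][0]  # new root hash
```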