
> Couldn't this pretty easily be solved at the file system level?

It would not solve anything, because the problem does not exist in the first place; it would instead badly break the semantics of the UNIX file system at exactly that level.

> Just store a back pointer from a file to each of its names.

UNIX file systems do not have files in the conventional sense. They have disk block allocations referenced by an inode, with one or more directory entries pointing to a specific block allocation via that inode. This is what makes hard links possible and very cheap. It is a one-to-many relationship (one block allocation to many directory entries), and turning it into a many-to-many relationship, with each directory entry pointing to every other directory entry for the same inode across the entire file system, would be a nightmare in every imaginable way.
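The one-to-many relationship can be observed directly from user space. A minimal sketch (the filenames are made up for illustration): two directory entries created with os.link() resolve to the same inode number, and the inode's link count tracks how many names point at it.

```python
import os
import tempfile

# Create one file (one block allocation, one inode, one directory entry).
d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")

with open(a, "w") as f:
    f.write("data")

os.link(a, b)  # add a second directory entry for the same inode

st_a, st_b = os.stat(a), os.stat(b)
assert st_a.st_ino == st_b.st_ino  # both names reference one inode
assert st_a.st_nlink == 2          # the inode now has two links

os.unlink(a)  # removing one name leaves the data reachable via the other
assert os.stat(b).st_nlink == 1
```

Note that the inode itself stores only the link count, not the names: there is no back pointer from the inode to its directory entries, which is exactly why the proposed reverse mapping would require new on-disk structures.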

It is even possible to zero all directory entries pointing to an inode: if you poke around with the file system debugger, you can manually delete the last remaining directory entry without releasing the allocated blocks back into the free block pool (the next fsck run will reclaim them anyway).
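A safe way to see an inode outlive its last directory entry, without a file system debugger, is the classic unlink-while-open trick. This is a sketch of the mechanism, not the fsdb procedure described above: the kernel keeps the block allocation alive as long as a file descriptor references the inode, and frees it only on the last close.

```python
import os
import tempfile

# Open a file, then remove its only directory entry.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")
os.unlink(path)  # link count drops to zero; no name remains

assert not os.path.exists(path)          # no directory entry left
os.lseek(fd, 0, os.SEEK_SET)
assert os.read(fd, 10) == b"still here"  # data still readable via the fd

os.close(fd)  # only now does the kernel release the blocks
```

If the machine crashes between the unlink and the close, this is precisely the kind of anonymous inode that fsck later sweeps up.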




> It is even possible to zero directory entries pointing to an inode.

Historically, fsck would link such anonymous inodes into lost+found, using their inode number as the file name in the lost+found directory, but I admit I have no idea whether this still applies to modern journaled file systems.


File system journals have reduced the likelihood of unlinked inodes ending up in /lost+found, but they have not eliminated it completely. There is still a non-zero chance of the journal itself being corrupted by an unexpected shutdown or complete power loss during a journal update, with something turning up after a full fsck run later.



