Here is the simplest way I can put it: when you delete a file on NT, any NtCreateFile() on its name will fail with STATUS_DELETE_PENDING until the last handle is closed.[1] Unix, by contrast, removes the name immediately, and the name is reusable for any number of unrelated future files.
[1] Note that this is not the same as your "must be reachable via some path". The file is literally inaccessible by name after the delete: try to access it by name and you get STATUS_DELETE_PENDING. This is unrelated to the other misfeature, being able to block deletes entirely by not specifying FILE_SHARE_DELETE.
"Reachable" doesn't mean "openable". Reachable just means there is a path that the system identifies the file with. There are files you cannot open but which are nevertheless reachable by path. Lots of reasons can exist for this and a pending delete is just one of them. Others can include having wrong permissions or being special files (e.g. hiberfil.sys or even $MFTMirr).
I would be kind of surprised if "the system" cares much about the name of a delete pending file. NT philosophy is to discard the name as soon as possible and work with handles. I was under the impression that ntfs.sys only has this behavior because older filesystems led everybody to expect it.
Well if you look at the scenario you described, I don't believe the parent folder can be deleted while the child is pending deletion. And if the system crashes, I'd expect the file to be there (but haven't tested). So the path components do have to be kept around somewhere...
It's true that NT won't let you remove a directory if a child has a handle open. But I suspect you are getting the reasoning backwards: the directory is not empty as long as that delete-pending file is there. Remove this ill-conceived implementation detail (and it is just that), and this and other problems go away.
There is also an API that retrieves a filename from a handle, but I don't think it guarantees that the name is usable.
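(On Windows the API in question is presumably GetFinalPathNameByHandle. The rough Linux analogue, reading the /proc/self/fd symlink, illustrates exactly the caveat above: once the file is unlinked, the name you get back from the handle is no longer usable. A quick sketch, Linux-specific since it assumes the /proc layout:)

```python
import os
import tempfile

fd, path = tempfile.mkstemp()

# Recover a name from the descriptor via /proc (Linux-specific).
name = os.readlink(f"/proc/self/fd/{fd}")
assert name == os.path.realpath(path)

# After unlink, the kernel still reports the old name, but with a
# marker appended -- that string is not an openable path anymore.
os.unlink(path)
assert os.readlink(f"/proc/self/fd/{fd}").endswith(" (deleted)")

os.close(fd)
```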
It's easy to imagine a system that works the way I would have it, because it exists: Unix. You can unlink and keep descriptors open. NT is very close to being there too, except for these goofy quirks which are kind of artificial.
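(The Unix behavior in a nutshell, as a runnable Python sketch on any POSIX system: the name disappears immediately, the data survives through the descriptor, and the name is free for an unrelated file with no delete-pending window.)

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"old contents")

os.unlink(path)                    # the name is gone immediately...
os.lseek(fd, 0, os.SEEK_SET)
assert os.read(fd, 100) == b"old contents"   # ...but the data is still there

# The name is reusable right away for a completely unrelated file.
with open(path, "w") as f:
    f.write("new, unrelated file")
assert os.fstat(fd).st_ino != os.stat(path).st_ino  # two distinct inodes

os.close(fd)
os.unlink(path)
```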
This has led to some interesting observations for me on Linux when a really large log file that was still in use got deleted (`rm` on a file the writer still holds open). Tools like du now cannot find where the disk usage actually is; only on restart of the app does usage show correctly again. Kinda hard to troubleshoot if you were not aware this was what happened. (Truncating in place with `cat /dev/null > file` is the usual way to avoid this, since it keeps the name and inode rather than unlinking.)
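(The gap is easy to see directly: du walks names, and after the unlink there is no name to walk, but fstat on the open descriptor still reports the pinned space. A small POSIX demonstration in Python:)

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 1_000_000)   # stand-in for the huge log file
os.unlink(path)                  # "deleted", but the writer still has it open

# Name-based tools (du, ls, find) can no longer see the usage...
assert not os.path.exists(path)
# ...yet the descriptor still pins a megabyte of disk.
assert os.fstat(fd).st_size == 1_000_000
assert os.fstat(fd).st_nlink == 0   # zero remaining names

os.close(fd)                     # only now is the space actually freed
```

(In practice `lsof +L1` lists exactly these open-but-unlinked files, which is the quickest way to track the missing space down.)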
I agree that this is a drawback, a common gotcha for the Unix behavior that would be more user-visible with the NT behavior. But to anyone advocating the Windows way I would ask: is it worth getting this fringe detail "right" by turning every unlink(x); open(x, O_CREAT ...); into a risky operation that may randomly fail depending on what another process is doing to x? On Windows I have seen this pattern, a common one because most people aren't aware of this corner case, be the cause of seemingly random failures that are rather inexplicable to most programmers. (Often the program holding x open is an AV product scanning it for viruses, meaning that any given user system might have a flurry of filesystem activity that may or may not race with your process.)
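(For concreteness, here is the pattern in question. On POSIX it cannot fail because of the concurrent holder; on Windows the unlink, or the re-create during the delete-pending window, may be rejected whenever some other process holds x open without FILE_SHARE_DELETE. The "other process" is simulated here by a second descriptor in the same process:)

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "x")
with open(path, "w") as f:
    f.write("v1")

# Simulate the other process: a concurrent reader holding x open.
reader = os.open(path, os.O_RDONLY)

# The replace-by-recreate pattern. On POSIX this succeeds regardless of
# the reader; on Windows it may fail with a sharing violation or hit
# the delete-pending window.
os.unlink(path)
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"v2")
os.close(fd)

# The old reader is undisturbed and still sees the old contents.
assert os.read(reader, 10) == b"v1"
os.close(reader)
assert open(path).read() == "v2"
```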