
Applications for which filesystem-like access is important (i.e. requiring lots of POSIX file I/O system calls, e.g. read(2)/write(2)/lseek(2)) but latency is unimportant seem pretty niche to me. If you don't need any of the POSIX syscalls, it's not that much more difficult to work with bucket URLs vs. file paths — the general format is the same, i.e. slash-delimited file/directory hierarchies.
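Roughly what that parallel looks like in Python, as a sketch (the bucket name, object path, and local path here are all made up): the slash-delimited hierarchy is the same, you just trade open()/read() for a ranged object read.

    # Sketch only: hypothetical paths and bucket, google-cloud-storage client assumed.
    from google.cloud import storage

    # Local file: slash-delimited path, read via POSIX open()/read().
    with open("datasets/2024/records.csv", "rb") as f:
        head = f.read(1024)

    # Object in a bucket: same slash-delimited hierarchy, addressed through
    # the storage API instead of the filesystem.
    client = storage.Client()
    bucket = client.bucket("my-bucket")                 # hypothetical bucket name
    blob = bucket.blob("datasets/2024/records.csv")
    head = blob.download_as_bytes(start=0, end=1023)    # ranged read instead of read(2)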



Not everything is a webserver. There's a lot of software out there that doesn't expect files to exist anywhere but on disk, and it's not worth fetching them all from cloud storage before you start working on the data. It's easier to just mount a bucket onto a VM with GCSFuse and let the user do what they will. Works great for ad-hoc analysis of poorly structured or unstructured data.
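A sketch of what that buys you, assuming a bucket already mounted with gcsfuse at a made-up mountpoint: the ad-hoc analysis code stays plain POSIX-style file I/O and never touches a storage SDK.

    # Sketch only: /mnt/my-bucket is a hypothetical gcsfuse mountpoint.
    import os

    MOUNT = "/mnt/my-bucket"

    # Plain directory listing and reads; the code has no idea the "disk"
    # is actually a GCS bucket behind FUSE.
    for name in sorted(os.listdir(MOUNT)):
        path = os.path.join(MOUNT, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                print(name, len(f.read(4096)), "bytes in first read")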


And for your use case, the latency is not a concern? I suppose that would be true if you were mostly dealing with really big files and only cared about reading large contiguous chunks of them, but I would consider this a fairly niche application.

In my use case, taking ~1 second each time to `ls` a directory, `stat` a file, or `lseek` within a file was simply unacceptable. This was on a cloud VM, so the latency would be at its absolute minimum.
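For anyone curious how that shows up, here's a rough way to measure it (the mountpoint is hypothetical): time each metadata call individually, since on a FUSE-backed bucket each one can be a full network round trip rather than a local lookup.

    # Sketch only: /mnt/my-bucket is a hypothetical gcsfuse mountpoint.
    import os, time

    MOUNT = "/mnt/my-bucket"

    def timed(label, fn):
        t0 = time.perf_counter()
        result = fn()
        print(f"{label}: {time.perf_counter() - t0:.3f}s")
        return result

    names = timed("listdir", lambda: os.listdir(MOUNT))
    if names:
        first = os.path.join(MOUNT, names[0])
        timed("stat", lambda: os.stat(first))
        with open(first, "rb") as f:
            # Seek near the end of the file, then read a small chunk.
            offset = max(os.path.getsize(first) - 1024, 0)
            timed("seek+read", lambda: (f.seek(offset), f.read(1024)))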


In VFX, a single texture can be terabytes...



