Look at object and capability machines. Back before OSes became so homogeneous there were a _LOT_ of ideas that didn't map to the modern concept of a file. Some of these machines still exist. For example, the AS/400/iSeries doesn't really differentiate between RAM and storage with its object storage, which makes it a perfect fit for a modern non-volatile RAM machine. The original PalmOS had a similar concept.
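You can crudely approximate that single-level-store feel on a conventional OS by memory-mapping a file, so "in-memory" writes to an object land in persistent storage. This is only a sketch of the idea (on the AS/400 there's no RAM/storage split to paper over in the first place), and the file name is invented for the example:

```python
# Rough approximation of a single-level store on a conventional OS:
# memory-map a file so that in-memory object updates persist to storage.
# "objects.store" is a made-up name for this example.
import mmap, os

path = "objects.store"
if not os.path.exists(path):
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)  # reserve one page

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"   # looks like a plain memory write...
    mem.flush()           # ...but it lands in persistent storage
    mem.close()
```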
Of course, all the rage the last couple of years has been key/value stores, which in old terminology one might call KCD (key, count, data), or, rearranged a bit, CKD (count, key, data), aka the technology used for persistent disk storage on IBM mainframes.
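To make the analogy concrete, here's a toy count-key-data style record in Python. It's only loosely modeled on how mainframe DASD lays records out; the field widths are invented for the illustration:

```python
# Toy CKD-style record: a count field giving the lengths of the
# key and data that follow, then the key bytes, then the data bytes.
# Field sizes here are made up for the example.
import struct

COUNT_FMT = ">HI"  # 2-byte key length, 4-byte data length

def pack_ckd(key: bytes, data: bytes) -> bytes:
    return struct.pack(COUNT_FMT, len(key), len(data)) + key + data

def unpack_ckd(record: bytes):
    key_len, data_len = struct.unpack_from(COUNT_FMT, record)
    off = struct.calcsize(COUNT_FMT)
    key = record[off:off + key_len]
    data = record[off + key_len:off + key_len + data_len]
    return key, data

rec = pack_ckd(b"user:42", b'{"name": "Ada"}')
print(unpack_ckd(rec))  # (b'user:42', b'{"name": "Ada"}')
```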
This is actually one of the things that has gotten a lot easier on the internet the past few years as book scanners have become more common. There now seems to be an effort to preserve old Burroughs (and similar) manuals online rather than letting them collect dust in people's attics.
> We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.
> The iBench tasks also illustrate that file systems are now being treated as repositories of highly-structured “databases” managed by the applications themselves. In some cases, data is stored in a literal database (e.g., iPhoto uses SQLite), but in most cases, data is organized in complex directory hierarchies or within a single file (e.g., a .doc file is basically a mini-FAT file system). One option is that the file system could become more application-aware, tuned to understand important structures and to better allocate and access these structures on disk. For example, a smarter file system could improve its allocation and prefetching of “files” within a .doc file: seemingly non-sequential patterns in a complex file are easily deconstructed into accesses to metadata followed by streaming sequential access to data.
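The ".doc is basically a mini-FAT file system" point is quite literal: legacy .doc/.xls/.ppt files are OLE2 compound files, with their own internal directory tree and allocation tables. You can poke around inside one with the third-party `olefile` package (`pip install olefile`); the filename here is a placeholder, and `WordDocument` is the conventional name of the main text stream in .doc files:

```python
# Browse the "file system" inside a legacy Word document.
# "report.doc" is a made-up filename for this example.
import olefile

if olefile.isOleFile("report.doc"):
    with olefile.OleFileIO("report.doc") as ole:
        # list the "files" (streams) in the document's directory tree
        for stream in ole.listdir():
            print("/".join(stream))
        # read one embedded stream
        if ole.exists("WordDocument"):
            data = ole.openstream("WordDocument").read()
            print(len(data), "bytes in WordDocument stream")
```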
File systems are just NoSQL databases: hierarchical key-value blob stores. There are obviously a ton of other ways to model databases that could be used. At the other extreme, I think Oracle DB runs quite happily on raw disks, or at least did so at some point.
Of course, I'm not sure if the parent meant files as a way to structure/store data (that hierarchical blob store) or as a way to access data (something you `open`, `read`, `seek`, etc.), as they are slightly different things.
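A minimal sketch of the two views side by side: treating the file system as a key-value blob store (keys are paths, values are byte blobs), then touching the same blob through the byte-stream access interface. The directory name is invented for the example:

```python
# View 1: the file system as a hierarchical key-value blob store.
# "kvroot" is a made-up directory name.
from pathlib import Path

ROOT = Path("kvroot")

def put(key: str, value: bytes) -> None:
    p = ROOT / key
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_bytes(value)

def get(key: str) -> bytes:
    return (ROOT / key).read_bytes()

put("users/42/profile", b"Ada Lovelace")
print(get("users/42/profile"))

# View 2: the same blob through the open/seek/read access interface.
with open(ROOT / "users/42/profile", "rb") as f:
    f.seek(4)            # random access within the value...
    print(f.read())      # ...b'Lovelace'
```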
For a more real-world example, take a look at how mainframes, especially the AS/400 (edit: meant System/360 successors), manage data. As far as I know, they fundamentally work at a more structured, record-oriented level rather than on plain byte streams.
Oracle DB's preferred method of data storage is for you to hand it disks for Automatic Storage Management (ASM). It then takes care of replication and storage by itself.
In practice, this might be a little more performant but incurs significant manageability costs. If you're a committed Oracle shop, it's worthwhile. If you just want one or two database servers and you already have preferred storage methods, use those. (Or, more realistically, use PostgreSQL.)
The Newton had a type of object database called soups in place of a file system. It supported queries and stored data as frames. The coolest feature was that when you removed storage, it still worked with whatever was still there.
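Here's a loose Python sketch of that idea (not NewtonScript, and all names invented): frames are dict-like records living in soups spread across storage cards, and a query simply returns whatever frames are on the cards currently present:

```python
# Loose sketch of Newton-style soups: frames spread across storage
# cards; queries operate over whichever cards are still inserted.
class Store:  # one storage card
    def __init__(self, name):
        self.name, self.frames = name, []

class Soup:
    def __init__(self, stores):
        self.stores = list(stores)

    def add(self, store, frame):
        store.frames.append(frame)

    def query(self, **criteria):
        for store in self.stores:          # only cards still present
            for frame in store.frames:
                if all(frame.get(k) == v for k, v in criteria.items()):
                    yield frame

internal, card = Store("internal"), Store("card")
names = Soup([internal, card])
names.add(internal, {"name": "Ada", "city": "London"})
names.add(card, {"name": "Alan", "city": "London"})

print(list(names.query(city="London")))  # both frames found
names.stores.remove(card)                # "eject" the card
print(list(names.query(city="London")))  # still works with what's left
```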