The architecture slides don't seem to show any in-memory read caching of data. I assume there is at least some, but would it be on the disk side or the NIC side? I'd guess sendfile without direct I/O would read from the page cache.



Caching is left off for simplicity.

We keep track of popular titles, and try to cache them in RAM, using the normal page cache LRU mechanism. Other titles are marked with SF_NOCACHE and are discarded from RAM ASAP.
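For anyone wondering how that looks at the syscall level, here is a minimal sketch against FreeBSD's sendfile(2). The is_popular flag is a made-up placeholder for the popularity tracking described above, not the actual logic:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>    /* FreeBSD sendfile(2) and SF_NOCACHE */

    /*
     * Send one chunk of a title over a client socket.  Popular titles are
     * sent with flags = 0 so their pages stay on the normal page-cache LRU;
     * everything else gets SF_NOCACHE so the pages are freed back to the
     * system as soon as they have been sent.
     */
    static int
    send_chunk(int file_fd, int sock_fd, off_t off, size_t len, int is_popular)
    {
            off_t sent = 0;
            int flags = is_popular ? 0 : SF_NOCACHE;

            if (sendfile(file_fd, sock_fd, off, len, NULL, &sent, flags) == -1)
                    return (-1);    /* 'sent' still reports bytes written, e.g. on EAGAIN */
            return (0);
    }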


How much data ends up being served from RAM? I had the impression that it was negligible and that the page cache was mostly used for file metadata and infrequently accessed data.


It depends. Normally about 10-ish percent. I've seen well over that in the past for super popular titles on their release date.


On which NUMA node would that page cache be allocated? The one where the disk is attached, or the one where the data is used? Or is this more or less undefined and up to the OS?


This is gone over in the talk. We allocate the page locally to where the data is used. The idea is that we'd rather have the NVMe drive eat any latency from the NUMA bus transfer than have the CPU (SW TLS) or the NIC (inline HW TLS) stall waiting for one.
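As an illustration of that "allocate next to the consumer" idea (not the kernel-side sendfile path from the talk): on FreeBSD 12+ a serving thread can ask that its allocations prefer a given NUMA domain via cpuset_setdomain(2). The domain number for the NIC is assumed to be known from the hardware topology:

    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <sys/domainset.h>      /* NUMA domain sets and policies */
    #include <err.h>

    /*
     * Ask the kernel to prefer NUMA domain 'domain' for memory touched by
     * the current thread.  Pages then land next to the CPU (SW TLS) or NIC
     * (inline HW TLS) consuming them, and any cross-domain transfer is paid
     * on the NVMe read instead.
     */
    static void
    prefer_local_domain(int domain)
    {
            domainset_t mask;

            DOMAINSET_ZERO(&mask);
            DOMAINSET_SET(domain, &mask);

            if (cpuset_setdomain(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
                sizeof(mask), &mask, DOMAINSET_POLICY_PREFER) == -1)
                    err(1, "cpuset_setdomain");
    }

DOMAINSET_POLICY_PREFER falls back to other domains under memory pressure rather than failing, which fits the "prefer local, don't stall" intent here.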



