
I think another issue with compressing partial content is that those compressed responses cannot be cached efficiently: each requested range would have to be compressed on the fly. And while compression is not as computationally expensive as it used to be, it still adds overhead that could outweigh the bandwidth saved compared to serving the data uncompressed.

There should be a workaround for this: use a custom pre-compression scheme instead of relying on HTTP for the compression. Blocks within the file would have to be compressed separately, and you'd need some kind of index mapping uncompressed block offsets to compressed block offsets.

Unfortunately there doesn't seem to be a common off-the-shelf compression format that does this, and it means that you can't just use a standard SQLite file anymore. But it's definitely possible.
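Roughly, such a scheme could look like the sketch below (Python, assuming a fixed 64 KB block size and a JSON side file for the index; the names and layout are made up, not an existing format):

    # Sketch: deflate each fixed-size block of the original file on its own,
    # and keep an index mapping block number -> (offset, length) in the
    # compressed file. Block size and index format are arbitrary choices here.
    import json
    import zlib

    BLOCK_SIZE = 64 * 1024  # hypothetical block size

    def compress_blocks(src_path, dst_path, index_path):
        blocks = []  # [compressed_offset, compressed_length] per block
        offset = 0
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                block = src.read(BLOCK_SIZE)
                if not block:
                    break
                comp = zlib.compress(block, 6)
                dst.write(comp)
                blocks.append([offset, len(comp)])
                offset += len(comp)
        with open(index_path, "w") as f:
            json.dump({"block_size": BLOCK_SIZE, "blocks": blocks}, f)

    def read_block(dst_path, index_path, block_no):
        # Random access: seek straight to one compressed block and inflate it.
        with open(index_path) as f:
            index = json.load(f)
        comp_offset, comp_len = index["blocks"][block_no]
        with open(dst_path, "rb") as f:
            f.seek(comp_offset)
            return zlib.decompress(f.read(comp_len))

The block size is a trade-off: larger blocks compress better, but every random read has to download and decompress a whole block, so small reads waste more bandwidth.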




Update: Here is an example of a compression format that allows random access to 64 KB chunks and is compatible with gzip: https://manpages.ubuntu.com/manpages/kinetic/en/man1/dictzip...
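For the homegrown layout sketched above (not dictzip's actual on-disk format), the read path over HTTP could look roughly like this: map the uncompressed range to compressed blocks via the index, fetch them with a single Range request, and inflate locally. The URL and index structure are assumptions carried over from the previous sketch:

    import json
    import urllib.request
    import zlib

    def read_range(base_url, index, start, length):
        # Return `length` uncompressed bytes starting at uncompressed offset `start`.
        block_size = index["block_size"]
        first = start // block_size
        last = (start + length - 1) // block_size

        # One Range request covering all compressed blocks we need.
        comp_start = index["blocks"][first][0]
        comp_end = index["blocks"][last][0] + index["blocks"][last][1] - 1
        req = urllib.request.Request(
            base_url, headers={"Range": "bytes=%d-%d" % (comp_start, comp_end)}
        )
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()

        # Inflate each block, then cut out the slice the caller asked for.
        out = bytearray()
        pos = 0
        for b in range(first, last + 1):
            comp_len = index["blocks"][b][1]
            out += zlib.decompress(payload[pos:pos + comp_len])
            pos += comp_len
        skip = start - first * block_size
        return bytes(out[skip:skip + length])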


I only partly buy the caching argument. It applies to any ranged request, not just compressed ones, and there are plenty of ways for origin servers to describe what and how to cache content. The origin could explicitly allow caching only when access patterns are likely to repeat.
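For example (values made up, not recommendations), an origin could mark a versioned, pre-compressed file as cacheable and range-friendly, and caches that handle ranges can then store and reuse those responses:

    # Hypothetical origin response headers for a versioned, pre-compressed file.
    RESPONSE_HEADERS = {
        "Cache-Control": "public, max-age=86400, immutable",
        "Accept-Ranges": "bytes",
        "ETag": '"db-v42"',  # made-up validator so caches can revalidate
    }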



