
That's an interesting claim. I make range requests to 100GB+ files (genomics) all the time for work and it works great. I've never considered total file size as directly related to latency in this respect, assuming you have some sort of an index of course.



You can test this claim directly against an AWS S3 bucket.

First 100KB of a 100GB+ file:

curl -H "Range: bytes=0-100000" https://overturemaps-tiles-us-west-2-beta.s3.amazonaws.com/2... --output tmp -w "%{time_total}"

First 100KB at the 100GB mark:

curl -H "Range: bytes=100000000000-100000100000" https://overturemaps-tiles-us-west-2-beta.s3.amazonaws.com/2... --output tmp -w "%{time_total}"
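The same comparison can be scripted. Here is a minimal sketch in Python that times a small range read at the start of an object versus one at the 100 GB mark; the URL is a placeholder (the real bucket path is truncated above), and the network call only runs when executed as a script:

```python
import time
import urllib.request

def range_header(offset, length):
    """HTTP Range header value for `length` bytes starting at `offset`.
    Range offsets are inclusive, hence the -1."""
    return f"bytes={offset}-{offset + length - 1}"

def time_range_request(url, offset, length=100_000):
    """Fetch `length` bytes at `offset` and return the elapsed wall time."""
    req = urllib.request.Request(url, headers={"Range": range_header(offset, length)})
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Placeholder URL; substitute a real large object to reproduce the test.
    url = "https://example-bucket.s3.amazonaws.com/large-object"
    for offset in (0, 100_000_000_000):  # start of file vs. the 100 GB mark
        print(f"offset {offset}: {time_range_request(url, offset):.3f}s")
```

If S3 served later byte ranges more slowly, the second timing would be consistently larger; in practice the two should be close.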


Here the requests are really small, about 405 bytes each on average. I guess in your genomics work you are making larger requests, so it's probably less of an issue.

BTW, we are discussing this latency with bdon in this issue; it seems to be specific to Cloudflare: https://github.com/hyperknot/openfreemap/issues/16


I just tried @bmon's curl examples above with 100-byte requests. Similar results. I think the Cloudflare explanation is more likely.



