
One way to "solve" the time to visual completion would be to make all the images, but especially the larger images, progressive scan. For very large images, the difference in visual quality between 50% downloaded and 100% downloaded on most devices isn't noticeable, so the page would appear complete in half the time.
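For concreteness, here's a minimal sketch of re-encoding an image as progressive in a Node build step, assuming the sharp library (the filenames and quality setting are just placeholders, not anything from the article):

    // Re-encode a baseline JPEG as progressive so a partial download
    // renders a full-frame, low-detail preview instead of a top-down strip.
    import sharp from "sharp";

    async function makeProgressive(input: string, output: string): Promise<void> {
      await sharp(input)
        .jpeg({ progressive: true, quality: 80 }) // quality is illustrative
        .toFile(output);
    }

    makeProgressive("hero-large.jpg", "hero-large.progressive.jpg")
      .catch((err) => console.error(err));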



Totally. There are a bunch of ways to address the performance issue. As I alluded to at the end of the post, there are serious technology considerations when preprocessing so much image data.

We're currently looking at whether we can use IntersectionObserver for efficient lazy loading of images before they enter the viewport.
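The basic shape of that approach, sketched here assuming the real URL lives in a data-src attribute (the attribute name and rootMargin value are illustrative):

    // Swap in the real image source shortly before it scrolls into view.
    const observer = new IntersectionObserver(
      (entries, obs) => {
        for (const entry of entries) {
          if (!entry.isIntersecting) continue;
          const img = entry.target as HTMLImageElement;
          img.src = img.dataset.src ?? img.src; // real URL stored in data-src
          obs.unobserve(img); // load once, then stop watching
        }
      },
      { rootMargin: "200px" } // start loading ~200px before the viewport
    );

    document.querySelectorAll<HTMLImageElement>("img[data-src]")
      .forEach((img) => observer.observe(img));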


There's an excellent talk about doing exactly that, showing a working prototype and measured results:

https://www.youtube.com/watch?v=66JINbkBYqw


If there's a way to tell it not to render until x% downloaded, sure. Otherwise users on slower connections see the low-quality versions for a while, and that can be disconcerting, either to some users or to some PMs.
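There's no built-in "render only after x% downloaded" threshold that I know of, but a common workaround is to keep a blurred placeholder visible and swap in the full image only once it has fully downloaded and decoded. A sketch (the CSS class name is hypothetical):

    // Hide the half-rendered progressive scan: fetch the full image
    // off-screen and reveal it only after decode() resolves.
    async function revealWhenDecoded(img: HTMLImageElement, fullSrc: string): Promise<void> {
      const full = new Image();
      full.src = fullSrc;
      await full.decode(); // resolves once fully downloaded and decoded
      img.src = fullSrc;   // swap is instant; bytes are already cached
      img.classList.remove("blurred-placeholder"); // hypothetical class
    }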


This is correct. Visual completion will not be achieved until the images within the viewport are fully downloaded.

However, progressive JPEGs could improve initial paint times. These are dynamic, so each page would have its own unique (although related) profile.


Doesn't that increase the file size though? They're looking at 3G load speeds, so any increase in file size is probably unwelcome.


Progressive JPEGs are almost always smaller than baseline ones.
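This is easy to spot-check: re-encode the same source both ways at the same quality and compare byte counts (again sketched with sharp; results vary by image):

    // Compare baseline vs. progressive output size for one image.
    import sharp from "sharp";

    async function compareSizes(input: string): Promise<void> {
      const baseline = await sharp(input)
        .jpeg({ progressive: false, quality: 80 })
        .toBuffer();
      const progressive = await sharp(input)
        .jpeg({ progressive: true, quality: 80 })
        .toBuffer();
      console.log(`baseline: ${baseline.length} bytes, progressive: ${progressive.length} bytes`);
    }

    compareSizes("sample.jpg").catch(console.error);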


OTOH, progressive JPEGs tend to require much more memory to decode. I don't have specific numbers to cite; I'm only going off anecdotal usage of image programs over the years (e.g., Java photo uploaders that choked on progressive JPEGs).



