One way to "solve" the time to visual completion would be to make all the images, but especially the larger images, progressive scan. For very large images, the difference in visual quality between 50% downloaded and 100% downloaded on most devices isn't noticeable, so the page would appear complete in half the time.
Totally. There are a bunch of ways to address the performance issue. As I alluded to at the end of the post, there are serious technology considerations when preprocessing that much image data.
We're currently looking at whether we can use IntersectionObserver for efficient lazy loading of images before they enter the viewport.
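For anyone curious what that looks like, here's a minimal sketch of IntersectionObserver-based lazy loading. It assumes a convention (not from the post) where the real URL lives in a `data-src` attribute and `src` holds a placeholder; the `200px` root margin is an arbitrary choice to start loading a bit before the image scrolls into view.

```javascript
// Copy the deferred URL into src once the image nears the viewport.
// Clearing data-src afterwards makes repeated calls a no-op.
function swapToFullSrc(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
    delete img.dataset.src;
  }
  return img;
}

// Wire up the observer only in a browser environment.
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        swapToFullSrc(entry.target);
        obs.unobserve(entry.target); // each image only needs one load
      }
    }
  }, { rootMargin: '200px' }); // begin loading shortly before the image enters view

  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```

The unobserve call matters: without it the callback keeps firing on every scroll past the image, which is why the swap helper also clears `data-src` as a belt-and-braces guard.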
If there's a way to tell it not to render until x% has downloaded, sure. Otherwise users on slower connections see the low-quality versions for a while, and that can be disconcerting, either to some users or to some PMs.
OTOH, progressive JPEGs tend to require much more memory to decode. I don't have specific numbers to cite; I'm only going off anecdotal usage of image programs over the years (e.g., Java photo uploaders that choked on progressive JPEGs).