
Fractional scaling is just broken by design. If you have combined vector/bitmap graphics (like most web pages) designed for a 'canonical' DPI, you can either break the proportions between vectors and bitmaps (rendering vectors at the proper DPI while using the nearest integer scale for bitmaps), or render everything at the proper scale and introduce artifacts such as blurred sharp edges in bitmaps.
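To make that trade-off concrete, here is a minimal sketch in Python. The numbers (a 1.5x display scale and a 100 px element that exists both as a vector and as a bitmap) are made up for illustration:

    logical_size_px = 100    # element size in layout (CSS) pixels
    display_scale = 1.5      # fractional scale factor of the display (assumed)

    # Option A: vectors at the true scale, bitmaps at the nearest integer scale.
    vector_px = logical_size_px * display_scale          # 150.0 device px, crisp
    bitmap_px = logical_size_px * round(display_scale)   # 200 device px, crisp
    print(f"Option A: vector {vector_px:.0f} px vs bitmap {bitmap_px} px "
          f"({bitmap_px / vector_px:.2f}x mismatch in proportions)")

    # Option B: everything at the true scale; the bitmap must be resampled.
    resampled_px = logical_size_px * display_scale       # 150.0 device px
    print(f"Option B: bitmap resampled from {logical_size_px} to {resampled_px:.0f} px "
          f"(each source pixel spans {display_scale} device pixels -> sharp edges blur)")

Either the bitmap ends up a third larger than its vector neighbours, or it gets resampled onto a grid it doesn't align with.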



Why is that broken by design? You'll never see content at an integer scale of its original source anyway. e.g., many Sony cameras have a 6K sensor but downscale (rather than bin) the image to 4K.

Android btw uses the same type of scaling – you'll just get slightly blurry edges, but that's not much of an issue.

It's much better to have slight blurring around the few rare edges where legacy bitmaps are used.

The alternatives are:

1) full-screen scaling, which means full-screen blurring, broken gamma correction, and wasted performance (e.g. rendering at 2x only to downscale to the display's native 1x resolution; see the sketch after this list), or

2) content that's not at the correct scale, making it too small or too large to work with.

I'd much rather have a situation where the 1% of content that uses legacy bitmaps is blurry than one where EVERYTHING is blurry.
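A rough back-of-the-envelope sketch of the overhead in option 1, assuming the desktop is rendered at 2x its logical size and then downscaled to the panel. The panel resolution and the 1.5x target scale are assumptions for illustration:

    panel_w, panel_h = 2560, 1440   # physical pixels of a hypothetical panel
    target_scale = 1.5              # desired fractional scale (assumed)
    render_scale = 2.0              # integer scale the desktop is rendered at

    # Logical desktop size implied by the target scale.
    logical_w = panel_w / target_scale
    logical_h = panel_h / target_scale

    rendered_px = (logical_w * render_scale) * (logical_h * render_scale)
    panel_px = panel_w * panel_h

    print(f"Rendered {rendered_px:,.0f} px, panel needs {panel_px:,} px "
          f"-> {rendered_px / panel_px:.2f}x the work, plus a full-screen "
          f"downscale pass that softens everything")

At 1.5x that works out to (2 / 1.5)^2, roughly 1.78x as many pixels as the panel can actually show.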


And yet browsers support a whole range of fractional scaling factors, often letting you pick individual percentage points. Same with word processors.
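Browsers can get away with this because most page content is vector (text, boxes, SVG) and can be re-rasterized at any scale, with box edges snapped to the device pixel grid so they stay sharp. A simplified sketch of that idea, with a made-up zoom value and coordinates:

    zoom = 1.10                  # 110% page zoom (made-up example)
    device_pixel_ratio = 1.0     # hypothetical 1:1 display
    scale = zoom * device_pixel_ratio

    # x-coordinates of some box edges, in CSS pixels.
    css_edges = [0, 100, 250, 333]

    for edge in css_edges:
        exact = edge * scale
        snapped = round(exact)
        print(f"CSS x={edge:>3} -> {exact:7.2f} device px, snapped to {snapped}")

Bitmaps embedded in the page still get resampled at fractional zoom, which is the same blur trade-off discussed upthread.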




