
Here's an image, displayed at "normal" resolution. One pixel of the source image goes to one pixel on the display.

Now we zoom. We're displaying a smaller part of the source image, but putting it on the same number of destination pixels. So where do the extra pixels come from? There are (at least) two possible answers:

1. We repeat source pixels across multiple display pixels. This leads to aliasing: blocky stair-steps in the displayed image.

2. We make up values for the extra display pixels that were not in the source image. This is done by interpolation, not random guessing; bicubic interpolation is pretty good. But still, the program is in fact "making up" values for the new pixels. Both options are sketched in code below.
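As an illustration only, here is a minimal sketch of the two options using Pillow (my choice of library, not something the comment specifies); the file name and crop region are placeholders:

  # Minimal sketch of the two options above, using Pillow.
  # "photo.png" and the crop region are placeholders.
  # (Image.Resampling needs Pillow >= 9.1; older versions use
  # Image.NEAREST / Image.BICUBIC instead.)
  from PIL import Image

  img = Image.open("photo.png")            # source image
  w, h = img.size
  crop = img.crop((0, 0, w // 4, h // 4))  # "zoom": a smaller part of the source

  # Option 1: repeat source pixels (nearest neighbour) -> blocky stair-steps.
  blocky = crop.resize((w, h), resample=Image.Resampling.NEAREST)

  # Option 2: interpolate new values (bicubic) -> smoother, but "made up" pixels.
  smooth = crop.resize((w, h), resample=Image.Resampling.BICUBIC)

  blocky.show()
  smooth.show()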




Even simpler: take a 10x10 image displayed on a 10x10 screen, then zoom in on the upper-right 3x3 pixels so they fill the 10x10 screen.

Now every 3 source pixels are being shown on 10 display pixels; each source pixel is spread across 3 1/3 display pixels. How does that work?
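To make the ratio concrete, here is a tiny sketch of the mapping (mine, not the commenter's; the corner-aligned convention, where display column 0 maps to source column 0, is an assumption):

  # Where does each of the 10 display columns land in the 3-pixel-wide
  # source region? Corner-aligned mapping; other conventions shift the
  # coordinates slightly.
  src_width, dst_width = 3, 10
  scale = src_width / dst_width          # 0.3 source pixels per display pixel

  for dst_x in range(dst_width):
      src_x = dst_x * scale              # fractional source coordinate
      print(f"display column {dst_x} -> source column {src_x:.1f}")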


For each pixel on the display, you derive a (not necessarily integer) pixel coordinate on the original image. The top-left display pixel may be at (0, 0) on the image. The next display pixel to the right is then at (0, 0.3) on the image, since 3 source pixels are spread over 10 display pixels.

So say you're dealing with the display pixel that lands at (0, 1.2) on the image. You take the pixels at (0, 0), (0, 1), (0, 2), and (0, 3), and you run a cubic interpolation to find the value at 1.2. (You do this three times, for red, green, and blue.)

If you're at non-integer coordinates in both directions, then you run an interpolation that is cubic in both directions (that is, a bi-cubic interpolation).
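A hand-rolled sketch of that procedure, assuming a Catmull-Rom cubic kernel (one common choice; the comment doesn't say which cubic it has in mind), a single grayscale channel, and no edge handling:

  def cubic(p0, p1, p2, p3, t):
      # Catmull-Rom cubic through four evenly spaced samples; returns the
      # value at fraction t (0..1) of the way from p1 to p2.
      return 0.5 * (
          2 * p1
          + (-p0 + p2) * t
          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
          + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3
      )

  def bicubic(img, y, x):
      # Sample a 2-D list of grayscale values at fractional (y, x).
      # Assumes the 4x4 neighbourhood lies inside the image (no edge handling).
      iy, ix = int(y), int(x)
      ty, tx = y - iy, x - ix
      # Interpolate along x in each of the four neighbouring rows...
      rows = [cubic(*(img[iy - 1 + r][ix - 1 + c] for c in range(4)), tx)
              for r in range(4)]
      # ...then interpolate those four row results along y.
      return cubic(*rows, ty)

  # The 1-D example above: the value at column 1.2 of a row, interpolated
  # from the pixels in columns 0..3 (made-up grayscale values).
  row = [10, 20, 40, 80]
  print(cubic(row[0], row[1], row[2], row[3], 0.2))  # 0.2 of the way past column 1

For a color image you run this once per channel, as the comment says.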


True, but you’ve just introduced new colors!



