Wide angle lens distortion correction from lines (srcf.net)
158 points by hugohadfield 3 months ago | 58 comments



I swear I read something similar along the lines (pun intended) of this a couple of years back, but the energy function wasn't the Radon transform; I forget what exactly it was. The hardest part of using this in production is that there are a lot of hand-tuned values, particularly in the edge-detection portion, which makes it difficult to scale. It's usually cheaper and easier to calibrate the camera at mass scale in the factory using "old school" methods.



you mean checkerboards?


I have been wanting to do this for my

https://www.kandaovr.com/qoocam-ego

After reading Lenny Lipton's books about stereo cinematography I've been debugging my stereograms, and one thing I know is that the lenses on that thing have a little bit of pincushion distortion, which means stereo pairs that are supposed to be perfectly aligned vertically aren't quite.

I know DxO makes distortion correction filters for lens/camera pairs, and I was sure I could make one by taking pictures of a grid, but this gives a definite path to doing it.


The ideas listed in the document are about correcting distortion when the image has already been taken and you can't control the scene.

As you've got the camera in hand, you've got an even simpler option available: you can print a special pattern called a 'ChArUco board' [1], take pictures of it from a few different angles, and then calculate the camera "intrinsics" (field of view, lens distortion parameters) and "extrinsics" (relative positions of your two cameras) based on those images.

[1] https://docs.opencv.org/3.4/da/d13/tutorial_aruco_calibratio...
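
For the curious, here's a rough sketch of that calibration in Python with OpenCV's aruco module, following the older contrib-style API used by the linked 3.4 tutorial (newer OpenCV versions renamed some of these calls). The board dimensions and filenames are made up, and this covers the intrinsics only; stereo extrinsics would go through cv2.stereoCalibrate afterwards:

```python
import cv2
import glob

# Made-up board: 7x5 squares, 4 cm squares with 2 cm markers.
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.02, aruco_dict)

all_corners, all_ids, image_size = [], [], None
for path in glob.glob("board_*.jpg"):  # photos of the board from several angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is not None:
        n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
            corners, ids, gray, board)
        if n > 3:  # need a few corners per view to be useful
            all_corners.append(ch_corners)
            all_ids.append(ch_ids)

# Solve for the intrinsics: camera matrix K and distortion coefficients.
err, K, dist, _, _ = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("RMS reprojection error:", err)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```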


You can also use the patterns to generate DNG/RAW lens profiles which allow automatic lens correction in popular apps like Lightroom etc:

Adobe Lens Profile Creator

https://helpx.adobe.com/camera-raw/digital-negative.html


I have difficulty understanding what the transformed image is equivalent to. This makes it feel like the picture was taken at a different distance and focal length, but[1] it would look different if that were the case because the perspective would be different. Does this have any "physical" interpretation that would make it easier for me to understand? Like, cropping an image is equivalent to changing the focal length; what would this be equivalent to? A type of rectilinear lens?

[1] With the exception maybe for a single plane in focus?


> I have difficulty understanding what the transformed image is equivalent to.

As a non-photographer with zero knowledge about photography, the fixed image, with straight lines, feels much more natural to me.

I'd say it reminds me of 3D games or, say, 3D simulators?

Are 3D games not reproducing lens deformation more or less correctly, from a "physics" point of view? I happen to be on vacation atm in an apartment on the beach on the ninth floor with a clear view: what I see is much closer to the "corrected" (not my word but TFA's author's) version than to the other one.


The author is saying "wide angle lens" but means what people would conventionally call a "fish eye lens." Normally when someone says wide angle, it's still assumed to be a rectilinear image projection, same as what you get with 3D game rendering. And if you're talking about a curvy fisheye projection, you specify that.

The iPhone's ultrawide lens is a good example of a rectilinear projection with lots of example photos available.

It can produce weird-feeling images, with stuff at the edges looking stretched out and parallel lines being at significant angles to each other, but it does not make straight lines curved like the effect that the author is removing.

Example ultrawide photo with straight lines, from Reddit: https://www.reddit.com/r/iPhoneography/comments/ena7s5/bosto...
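
To make the fisheye/rectilinear difference concrete, here's a tiny sketch (my illustration, not from the article) of where a ray at angle theta from the optical axis lands on the sensor under the two models; equidistant is just one common fisheye projection:

```python
import numpy as np

# Rectilinear: r = f * tan(theta). Straight lines stay straight, but r blows
# up as theta approaches 90 degrees, hence the stretched-out edges.
def rectilinear(theta, f=1.0):
    return f * np.tan(theta)

# Equidistant fisheye: r = f * theta. Stays bounded, but bends straight lines.
def fisheye(theta, f=1.0):
    return f * theta

for deg in (10, 30, 50, 70):
    t = np.radians(deg)
    print(f"{deg:3d} deg  rectilinear={rectilinear(t):6.2f}  fisheye={fisheye(t):5.2f}")
```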


That is how the brain wants to see it. When I got a new pair of glasses everything looked very curvy. After a week every line was straight again, because the brain learned the new transformation.


As an artist, the transformed image is what I would draw using 1-point perspective: basically making everything straight lines. It intuitively feels a lot more natural and fits our mental model of how the human world is shaped (i.e. everything is a rectangle).

https://m.youtube.com/watch?v=qOojGBEsWQw


I’ve done some work on implementing this as a coder, not a mathematician. So, the following description is just how the process looks while you are implementing it :P

Take the original curved image and put it on a super stretchy rubber sheet. Pull all four corners out diagonally until the curves look straight. You have to pull really hard and the corners will be stretched out into thin spikes.

But, no one wants to see an image that’s 80% long, thin spikes with lots of empty space between them. So, go to the center and crop down to the biggest rectangle you can that doesn’t have empty space around the edges.
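
Not the article's method, just the generic OpenCV version of that rubber-sheet-and-crop step, assuming you already have intrinsics; the K and dist values below are placeholders you'd normally get from a calibration:

```python
import cv2
import numpy as np

img = cv2.imread("curved.jpg")
h, w = img.shape[:2]

# Placeholder intrinsics and barrel-distortion coefficients.
K = np.array([[800.0, 0.0, w / 2], [0.0, 800.0, h / 2], [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

# alpha=1 keeps every source pixel (the spiky look with empty gaps);
# alpha=0 crops to the biggest rectangle with no empty space at the edges.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
undistorted = cv2.undistort(img, K, dist, None, new_K)
x, y, rw, rh = roi
cv2.imwrite("straightened.jpg", undistorted[y:y + rh, x:x + rw])
```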


I would draw an analogy to map projections.

Or take an image of a soccer ball (the kind with pentagons and hexagons): you can see all of one hemisphere, but it's a "fisheye" view. If you take the half soccer ball, cut up the shapes, and rearrange them on a flat surface, you are adjusting the projection.


It sounds like you know this already, but as any portrait photographer would note, changing the focal length is not equivalent to cropping. It's roughly equivalent, at best.

I.e., telephoto lenses bring a different perspective, which includes distance compression. It's very apparent when photographing human faces.


If you take a shot with a 35mm, take the same shot with an 85mm, and then crop the 35mm to the same FOV as the 85mm, the images will look _identical_ (notwithstanding lens characteristics etc.). The compression you talk about is due to the _distance_ between the subject and the lens changing. You will get the same compression effect if you shoot with a 50mm from 30 feet away…


Changing the focal length doesn't inherently change the perspective, and (resolution and lens aberrations aside) is exactly equivalent to cropping.

What changing the focal length does do is (e.g.) make you stand further back, and that changes the perspective, causing distance compression, etc.
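
A quick numerical sketch of that point (mine, under an ideal pinhole model): the projection is x = f * X / Z, so changing f rescales everything uniformly, exactly like cropping and enlarging; the ratios between points at different depths only change when the distances Z change:

```python
# Ideal pinhole: a point at lateral offset X and depth Z lands at f * X / Z.
def project(f, X, Z):
    return f * X / Z

points = [(1.0, 2.0), (1.0, 4.0)]  # same offset, different depths
for f in (35.0, 70.0):
    xs = [project(f, X, Z) for X, Z in points]
    # xs[0] / xs[1] is 2.0 for every f: doubling the focal length scales the
    # image uniformly, like a 2x centre crop, leaving perspective untouched.
    print(f, xs, xs[0] / xs[1])
```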


True. A change in focal length is exactly equivalent to moving the vanishing points further away/closer to the picture plane. See the vid at the bottom of this page:

https://rmit.instructure.com/courses/87565/pages/perspective...


Ahh, you're right. Thanks.


His dissertation looks very interesting

https://hh409.user.srcf.net/index.html#PhDThesis


This is cool, but couldn't you generate the correction transformation simply from knowing the lens geometry? I assume this is what my phone is doing when I take wide-angle pictures (which don't have any visible distortion).


Depends on the reason why you are doing this transform. If it's just a visual correction filter, then that will work well enough. If you are trying to track camera movement across a series of images and match a 3D model to the footage, then it won't. You want to analyze the actual images the lens is producing and derive the distortion from that. Every lens is different. Different setups with the same lens may produce different distortions. A warm lens behaves differently from a cool lens. Change the focus, and the distortion may change (lens breathing). Some lenses exhibit different distortions at different zoom levels.


Yes. Most professional photo editing and management software has built-in functionality or an add-on for lens distortion correction. However, it either requires having the original photo, or at least a non-cropped version with the EXIF data, or some knowledge of what body, lens, and focal length were used.

This utility doesn't require the original non-cropped image nor any other information about the picture that was taken. You could scrape a bunch of pictures from Instagram or Facebook and batch-process away.


A question for those who know optics: if the angle of incidence is past the critical angle for red, does all of the visible spectrum get reflected without any chromatic effects?

Are there cameras that have sensors laid out on a curve matching the expected surface on which the image is in focus?

I wonder why there are no cameras (apart from astronomical telescopes) that use reflection only for imaging. Would such a camera be too bulky to be practical?



I have an old Nikon 500/8; gotta be honest, it's not very good.


Some are not very good, but others are. I currently have 4 different mirror lenses; two are not great, but the other two are very good. One of them is an AF mirror lens!

One thing is that they are extremely sensitive to shocks causing mirror misalignment - so if your lens has ever been dropped, it's probably performing worse than before due to the mirrors being out of whack.


Mine's from 1974 according to a serial number lookup, so who knows what it's been through in the last 50 years.


Thanks for the link. Learned something new.


In the early 2000s I was thinking about a machine vision camera that would use a mirror and a small lens to image a whole room, as seen from a corner. I figured it would take about 50 megapixels to get the performance I wanted and at that time 5 megapixels seemed like a lot.

Today that is no problem. A few years ago I saw this

https://owllabs.com/products/meeting-owl-3

at work, the fisheye lens on it is more compact than what I had in mind and it has enough pixels to pick out individuals speaking in a conference room.


> Are there cameras that have a sensors laid out on a curve matching the expected surface on which the image is in focus ?

Not a sensor, but some disposable film cameras have a curved film holder to compensate for low quality optics. Some panoramic film cameras do the same.


I had similar thoughts recently because I am working on a catadioptric system for a project at work.

https://www.reddit.com/r/Optics/comments/oimvt0/curved_camer...

https://www.digitalcameraworld.com/news/sonys-new-curved-ima...

It appears that curved sensors may exist somewhere in a lab, and have been slightly commercialized, but I didn't see any 'buy now' buttons when I looked.

I didn't dive too deep into it because it's not like I'm going to be changing the sensor in my design at this stage of the game, but it was an idea that a friend suggested when I talked about the limitations of the mirror-based system that we're using.

https://techxplore.com/news/2024-07-insect-autonomous-strate...

This link popped up on Hacker News a few days ago and I noticed that they were using a mirror in their optical system as well. I haven't had a chance to read beyond the promotional article above, so I don't know how they're overcoming the depth-of-field limitations with this kind of optical setup.


Very interesting. All the best for your project.


This site could really use a mobile version[0]

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_media_q...


Reader mode works alright


It should be noted that this article talks about a pretty niche use case without really spelling it out.

Camera optics are generally designed not to exhibit this kind of distortion. As other commenters note, wide-angle lenses are ground to provide rectilinear projection where horizontal and vertical lines are straight. Further, if a particular lens does exhibit distortion, the usual solution is to measure the effect and construct a reverse mapping that can be applied in software.

There are relatively few situations where you have a distorted image taken with unknown lens, but where you have a regular grid of horizontal and vertical lines for the algorithm to rely on.
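
For what it's worth, that reverse mapping is straightforward to apply once you have it. Here's a hedged OpenCV sketch where the radial model and its k1 value are made-up stand-ins for whatever you actually measured:

```python
import cv2
import numpy as np

img = cv2.imread("distorted.jpg")
h, w = img.shape[:2]
yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)

# Toy fitted model: each output pixel samples the source slightly closer to
# the centre as radius grows, stretching back out what barrel distortion
# squeezed together near the edges.
cx, cy, k1 = w / 2, h / 2, -0.2
r2 = ((xx - cx) / cx) ** 2 + ((yy - cy) / cy) ** 2
map_x = cx + (xx - cx) * (1 + k1 * r2)
map_y = cy + (yy - cy) * (1 + k1 * r2)

corrected = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("corrected.jpg", corrected)
```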


> There are relatively few situations where you have a distorted image taken with unknown lens, but where you have a regular grid of horizontal and vertical lines for the algorithm to rely on.

In visual effects, distortion correction is required before effective camera tracking can take place. It is also required for a matte to fit the footage. In such situations, it is not unknown to be given 'mystery meat' footage which requires distortion correction. You would be surprised how many directors and DOPs take VFX voodoo for granted and would rather save five minutes on set at the cost of two days in post-production.


Seems to me that there is _more_ than one solution to this problem of generating a straight-line image out of a curvy-line image.


The last picture reminds me of what photos from my iPhone look like around the edges


In-car racing cameras have a very wide FOV. It's not uncommon to have such corrections applied to the video stream. I believe even the ubiquitous GoPro has such a filter.


What is the difference between this and camera calibration?


You can get a rectilinear lens instead, you know.


Optimising for low distortion means trading off against something else: sharpness, brightness, size, weight, etc. Smartphone cameras have become so good because they're very intelligently optimised using a hybrid of hardware and software.

DSLR/mirrorless users still use lens correction (either in-camera or as part of the post-processing pipeline) because even a big, heavy, expensive pro-quality lens is still imperfect in ways that are relatively easy to compensate for in software.

https://www.canon-europe.com/pro/infobank/in-camera-lens-cor...


This isn't about correcting for minor imperfections, but converting the image from the wrong kind of lens.

See https://m.youtube.com/watch?v=6toiNmZ_e4I for the difference.


Sometimes computer vision applications require rectilinear images, but you don’t have a chance to choose the hardware, or it was chosen with other constraints in mind. No reason to dump on someone doing research to rectify an image in a novel way.


Sometimes you can, yes, if you are picking the lens with which a subject will be photographed -- you can get down as low as 9mm on 135-film area, and still buy a relatively rectilinear lens.

Sometimes you can't get a rectilinear lens, though: If I want to shoot wide angle on my phone, curvilinear will have to do.

Sometimes you don't even have a lens, you've just got a photo, and that photo is curvilinear.

Novel ways to adjust for distortion are always nice to have in the toolkit.


Sometimes you don't even have a photo, but rather a synthesized image that looks as if a lens was in use.

Sometimes, you want to have a system that's able to self-correct after you slap a random lens on it as it's working. Or, you're looking through a translucent material, which acts as an ad-hoc lens with unknown parameters, and you want to compensate on the go.

Point being, methods of on-line calibration without use of special calibration setup (like ArUco boards mentioned elsewhere) have a wide range of use cases and are always welcome.


I know very few 35mm format lenses with NO distortion.

The two I know of with the least distortion are actually primes from the 1980s. Nikon began allowing a small amount of distortion in their new prime designs circa 2010, choosing to correct it with an in-camera profile.

It's not as bad as it sounds. Getting rid of that last bit of distortion may require relatively major tradeoffs in other areas like brightness.


There are a number of lenses which prioritize distortion correction because they don't get to have lens profiles. Though even low-distortion wide-angle lenses generally retain low levels of high-order distortion (i.e. straight lines become slightly wavy across the image) instead of having a large amount of low-order distortion (i.e. being simply bent strongly one way or another); see e.g. the Laowa Zero-D lenses.

I do actually think the OEM design approach is better overall. It's a lot easier to near-perfectly correct high amounts of low order distortion than it is to make lines with a slight amount of 6th? 8th? order distortion actually straight. Even if the resulting raw image of the OEM lens looks more like a fisheye than a rectilinear lens.
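
For reference, a sketch (mine, not the parent's) of the usual radial polynomial model, which makes the low-order/high-order distinction concrete:

```python
# Brown-Conrady-style radial distortion: r' = r * (1 + k1*r^2 + k2*r^4 + k3*r^6).
# A big k1 bends lines smoothly one way (easy to invert well); small higher-order
# terms produce the slight waviness that is much harder to get perfectly straight.
def distort(r, k1=0.0, k2=0.0, k3=0.0):
    return r * (1 + k1 * r**2 + k2 * r**4 + k3 * r**6)

for r in (0.25, 0.5, 0.75, 1.0):  # normalised image radius
    print(r,
          distort(r, k1=-0.15),                     # strong low-order barrel
          distort(r, k1=-0.01, k2=0.03, k3=-0.02))  # subtle high-order mix
```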


Would one of these be the rectilinear 15mm?

Great lens - and a huge front element…

I used it in the '90s to make QTVR with a (really expensive) Kodak DCS 460: 6MP on a DX-sized CCD, no display, and a large 340MB PCMCIA disk ;-)


Zeiss cinema lenses (in particular Master Primes) have the least distortion I've come across.


and a spherical cow while you're at it.


They're commonly available, not something exotic.


Are they? How do you store spherical cows? Do you need to roll them out to pasture and then back into the barn, or do they roll on their own?


I mean you have to pick between fisheye and rectilinear lenses when you buy a wide-angle lens anyway. This is completely unnecessary; you only need to pick the lens that you actually want.

Why is everybody acting as if I proposed something outrageous or impossible?


> This is completely unnecessary, you only need to pick the lens that you actually want.

It sounds like you are thinking only in the context of photography. In robotics and machine vision applications you often choose the fisheye lens because they are cheaper than rectilinear lenses with the same FOV (if a rectilinear lens is even available in the form factor and FOV you need).

So what people do in those situations is get a crappier lens and calibrate it, so the algorithms know how much to correct for its crappiness. That is where this kind of calibration really shines.


I think the down-voting was harsh; it usually gets corrected in no time.

That said, people here are interested in the different ways of solving a problem, if for nothing else than to tickle themselves intellectually. So yes, rectilinear lenses exist, but that does not mean that computational methods are uninteresting or useless. For one thing, one need not purchase different kinds of lenses.


These aren’t even fisheye examples.


Even the best and most expensive professional lenses have some barrel and/or pincushion distortion.

It's unreasonable to expect never to need any correction, and it's actually a really interesting, non-trivial problem to tinker with.



