I swear I read something similar along the lines (pun intended) of this a couple of years back, but it was not the Radon transform; I forget what exactly it was. The hardest part of using this in production is that there are a lot of hand-tuned values, particularly in the edge-detection portion, which makes it difficult to scale. It's usually cheaper and easier to calibrate the camera at mass scale in the factory using "old school methods."
After reading Lenny Lipton's books about stereo cinematography, I've been debugging my stereograms, and one thing I know is that the lenses on that thing (https://www.kandaovr.com/qoocam-ego) have a little bit of pincushion distortion, which means stereo pairs that are supposed to be perfectly aligned vertically aren't quite.
I know DxO makes distortion correction filters for lens/camera pairs, and I was sure I could make one by taking pictures of a grid, but this gives a definite path to doing it.
The ideas listed in the document are about correcting distortion when the image has already been taken and you can't control the scene.
As you've got the camera in hand, you've got an even simpler option available: you can print a special pattern called a 'ChArUco board' [1], take pictures of it from a few different angles, and then calculate the camera "intrinsics" (field of view, lens distortion parameters) and "extrinsics" (relative positions of your two cameras) from those images.
[1] https://docs.opencv.org/3.4/da/d13/tutorial_aruco_calibratio...
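For reference, the "lens distortion parameters" such a calibration estimates are usually Brown-Conrady radial coefficients. A minimal numpy sketch of that model (the coefficients k1 and k2 here are arbitrary illustration values; real ones come out of the calibration):

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply Brown-Conrady radial distortion to normalized
    image coordinates (origin at the principal point)."""
    x, y = xy
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return np.array([x * scale, y * scale])

def undistort(xy, k1, k2, iters=20):
    """Invert the model by fixed-point iteration: repeatedly
    divide out the scale evaluated at the current estimate."""
    xy = np.asarray(xy, dtype=float)
    est = xy.copy()
    for _ in range(iters):
        r2 = est @ est
        scale = 1 + k1 * r2 + k2 * r2 * r2
        est = xy / scale
    return est

p = np.array([0.3, -0.2])
d = distort(p, k1=-0.15, k2=0.02)
roundtrip = distort(undistort(d, -0.15, 0.02), -0.15, 0.02)
```

For mild distortion the fixed-point iteration converges quickly, which is why the round trip recovers the distorted point essentially exactly.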
I have difficulty understanding what the transformed image is equivalent to. It makes it feel like the picture was taken at a different distance and focal length, but[1] it would look different if that were the case because the perspective would be different. Does this have any "physical" interpretation that would make it easier for me to understand? Like, cropping an image is equivalent to changing the focal length; what would this be equivalent to? A type of rectilinear lens?
[1] With the exception maybe for a single plane in focus?
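One way to give that question a physical answer: an ideal rectilinear (gnomonic) lens maps a ray at angle θ off the optical axis to image radius r = f·tan(θ), while a typical fisheye maps it to roughly r = f·θ (the equidistant model). The "corrected" image is what a rectilinear lens with the same field of view would have produced, which is why straight lines come out straight. A quick numeric illustration under those idealized models:

```python
import math

f = 1.0  # focal length, arbitrary units

def rectilinear(theta):
    """Ideal pinhole/rectilinear projection: straight lines stay straight."""
    return f * math.tan(theta)

def fisheye_equidistant(theta):
    """Equidistant fisheye: image radius grows linearly with angle."""
    return f * theta

# A ray 40 degrees off-axis lands at different radii under each model...
theta = math.radians(40)
r_fish = fisheye_equidistant(theta)
# ...and 'correcting' the fisheye image means resampling each radius
# back through the rectilinear model: r_rect = f * tan(r_fish / f)
r_rect = f * math.tan(r_fish / f)
```

Because tan(θ) > θ away from the center, the correction pushes edge pixels outward, which matches the stretched corners people describe.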
> I have difficulty understanding what the transformed image is equivalent to.
As a non-photographer with zero knowledge about photography, the fixed image, with straight lines, feels much more natural to me.
I'd say it reminds me of 3D games like, say, 3D game simulators?
Are 3D games not reproducing lens deformation more or less correctly from a "physics" point of view? I happen to be on vacation at the moment in an apartment on the beach, on the ninth floor with a clear view: what I see is much closer to the "corrected" version (not my word but TFA's author's) than to the other one.
That is how the brain wants to see. When I got a new pair of glasses, everything looked very curvy. After a week, every line was straight again because the brain learned the new transformation.
As an artist, the transformed image is what I would draw using 1-point perspective: basically making everything straight lines. It intuitively feels a lot more natural and fits our mental model of how the human world is shaped (i.e. everything is a rectangle).
I’ve done some work on implementing this as a coder, not a mathematician. So, the following description is just how the process looks while you are implementing it :P
Take the original curved image and put it on a super stretchy rubber sheet. Pull all four corners out diagonally until the curves look straight. You have to pull really hard and the corners will be stretched out into thin spikes.
But, no one wants to see an image that’s 80% long, thin spikes with lots of empty space between them. So, go to the center and crop down to the biggest rectangle you can that doesn’t have empty space around the edges.
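In code, the rubber-sheet description is an inverse mapping: for each pixel of the output, you compute which source pixel lands there and copy it. A toy numpy version with a made-up one-term radial model and nearest-neighbour sampling (real implementations interpolate and use calibrated coefficients):

```python
import numpy as np

def unwarp(img, k1=-0.2):
    """For each output pixel, apply the forward radial model to
    find where it came from in the distorted source image."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    norm = max(cx, cy)  # normalize coordinates to roughly [-1, 1]
    for y in range(h):
        for x in range(w):
            nx, ny = (x - cx) / norm, (y - cy) / norm
            r2 = nx * nx + ny * ny
            s = 1 + k1 * r2          # forward distortion scale
            sx = int(round(nx * s * norm + cx))
            sy = int(round(ny * s * norm + cy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

corrected = unwarp(np.arange(64, dtype=float).reshape(8, 8))
```

The center is left alone while corner pixels sample from closer to the middle, which is exactly the "pull the corners out" stretch; cropping afterwards just discards the output regions that sampled outside the source.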
Or, take an image of a soccer ball (the kind with pentagons and hexagons): you can see all of one hemisphere, but it's a "fisheye" view. If you take the half soccer ball, cut up the shapes, and rearrange them on a flat surface, you are adjusting the projection.
It sounds like you know this already, but as any portrait photographer would note, changing the focal length is not equivalent to cropping. It's roughly equivalent, at best.
I.e., telephoto lenses bring a different perspective, which includes distance compression. It's very apparent when photographing human faces.
This is cool, but couldn't you generate the correction transformation simply from knowing the lens geometry? I assume this is what my phone is doing when I take wide-angle pictures (which don't have any visible distortion).
Depends on the reason why you are doing this transform. If it's just a visual correction filter, then that will work well enough. If you are trying to track camera movement on a series of images and match a 3D model to the footage, then it's not enough. You want to analyze the actual images the lens is producing and derive the distortion from that.
Every lens is different. Different setups with the same lens may produce different distortions. A warm lens behaves differently than a cool lens. Change the focus, and the distortion may change (lens breathing). Some lenses exhibit different distortions at different zoom levels.
Yes. Most professional photo editing and management software has built-in functionality or an add-on for lens distortion correction. However, it either requires having the original photo, or at least a non-cropped version with the EXIF data, or some knowledge of what body, lens, and focal length were used.
This utility doesn't require the original non-cropped area nor any other information about the picture that was taken. You could scrape a bunch of pictures from Instagram or Facebook and batch process away.
A question for those who know optics: if the angle of incidence is past the critical angle for red, does all of the visible spectrum get reflected without any chromatic effects?
Are there cameras that have sensors laid out on a curve matching the expected surface on which the image is in focus?
I wonder why there are no cameras (apart from astronomical telescopes) that use reflection only for imaging. Would such a camera be too bulky to be practical?
In the early 2000s I was thinking about a machine vision camera that would use a mirror and a small lens to image a whole room, as seen from a corner. I figured it would take about 50 megapixels to get the performance I wanted and at that time 5 megapixels seemed like a lot.
Today that is no problem. A few years ago I saw this at work:
https://owllabs.com/products/meeting-owl-3
The fisheye lens on it is more compact than what I had in mind, and it has enough pixels to pick out individuals speaking in a conference room.
> Are there cameras that have sensors laid out on a curve matching the expected surface on which the image is in focus?
Not a sensor, but some disposable film cameras have a curved film holder to compensate for low quality optics. Some panoramic film cameras do the same.
https://www.reddit.com/r/Optics/comments/oimvt0/curved_camer...
https://www.digitalcameraworld.com/news/sonys-new-curved-ima...
It appears that curved sensors may exist somewhere in a lab, and have been slightly commercialized, but I didn't see any 'buy now' buttons when I looked.
I didn't dive too deep into it because it's not like I'm going to be changing the sensor in my design at this stage of the game, but it was an idea a friend suggested when I talked about the limitations of the mirror-based system we're using.
https://techxplore.com/news/2024-07-insect-autonomous-strate...
This link popped up on Hacker News a few days ago, and I noticed that they were using a mirror in their optical system as well. I haven't had a chance to read beyond that promotional article above, so I don't know how they're overcoming the depth-of-field limitations with this kind of optical setup.
It should be noted that this article talks about a pretty niche use case without really spelling it out.
Camera optics are generally designed not to exhibit this kind of distortion. As other commenters note, wide-angle lenses are ground to provide rectilinear projection where horizontal and vertical lines are straight. Further, if a particular lens does exhibit distortion, the usual solution is to measure the effect and construct a reverse mapping that can be applied in software.
There are relatively few situations where you have a distorted image taken with unknown lens, but where you have a regular grid of horizontal and vertical lines for the algorithm to rely on.
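The situation being described can be sketched as a one-parameter search: pick the radial coefficient that makes the detected line segments as straight as possible. A toy version on synthetic data (one-term model and a simple grid search; the article's actual method differs in detail):

```python
import numpy as np

def radial(pts, k):
    """Apply x -> x * (1 + k * r^2) to each point (toy one-term model)."""
    pts = np.asarray(pts, float)
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1 + k * r2)

def straightness(pts):
    """Squared residual of points about their best-fit line
    (smallest singular value of the centered point cloud)."""
    centered = pts - pts.mean(axis=0)
    return np.linalg.svd(centered, compute_uv=False)[-1] ** 2

# Synthesize a straight line bent by a known coefficient k_true:
# invert the model by fixed-point iteration to get 'observed' points.
k_true = -0.1
line = np.stack([np.linspace(-0.8, 0.8, 9), np.full(9, 0.5)], axis=1)
observed = line.copy()
for _ in range(50):
    r2 = (observed ** 2).sum(axis=1, keepdims=True)
    observed = line / (1 + k_true * r2)

# Grid-search the k that makes the corrected points most collinear.
candidates = np.linspace(-0.3, 0.1, 81)
best_k = min(candidates, key=lambda k: straightness(radial(observed, k)))
```

The residual is near zero only at the true coefficient, so even this crude search recovers it; with no grid in the scene, the hard part is finding enough reliable curved edges to feed it.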
> There are relatively few situations where you have a distorted image taken with unknown lens, but where you have a regular grid of horizontal and vertical lines for the algorithm to rely on.
In visual effects, distortion correction is required before effective camera tracking can take place. It is also required for a matte to fit the footage. In such situations, it is not unknown to be given 'mystery meat' footage which requires distortion correction. You would be surprised how many directors and DOPs take VFX voodoo for granted and would rather save five minutes on set at the cost of two days in post-production.
In-car racing cameras have a very wide FOV. It's not uncommon to have such corrections applied to the video stream. I believe even the ubiquitous GoPro has such a filter.
Optimising for low distortion means trading off against something else: sharpness, brightness, size, weight, etc. Smartphone cameras have become so good because they're very intelligently optimised using a hybrid of hardware and software.
DSLR/mirrorless users still use lens correction (either in-camera or as part of the post-processing pipeline) because even a big, heavy, expensive pro-quality lens is still imperfect in ways that are relatively easy to compensate for in software:
https://www.canon-europe.com/pro/infobank/in-camera-lens-cor...
Sometimes computer vision applications require rectilinear images, but you don’t have a chance to choose the hardware, or it was chosen with other constraints in mind. No reason to dump on someone doing research to rectify an image in a novel way.
Sometimes you can, yes, if you are picking the lens with which a subject will be photographed: you can get down as low as 9mm on a 135-film-sized frame and still buy a relatively rectilinear lens.
Sometimes you can't get a rectilinear lens, though: If I want to shoot wide angle on my phone, curvilinear will have to do.
Sometimes you don't even have a lens, you've just got a photo, and that photo is curvilinear.
Novel ways to adjust for distortion are always nice to have in the toolkit.
I know very few 35mm-format lenses with NO distortion.
The two I know of with the least distortion are actually primes from the 1980s. Nikon began allowing a small amount of distortion in their new prime designs circa 2010, choosing to correct it with an in-camera profile.
It's not as bad as it sounds. Getting rid of that last bit of distortion may require relatively major tradeoffs in other areas like brightness.