So it seems like it's just MVS (multi-view stereo) to get a depth map, plus recovery of the haze characteristics by sampling a color reference at a bunch of distances. That seems blindingly obvious, unless I missed something.
> Sea-thru is protected by a registered patent owned by Carmel Ltd the Economic Co. of the Haifa University and its subsidiary SeaErra
Looks like they are going to monetise this technology at some point, given the disclaimer at the bottom of the page. That is not wrong in itself, but it feels like a PR exercise dressed up as something academic, which is a little creepy.
It is not unusual in academia to patent all innovations with potential commercial applications. At least in Canada, universities typically have innovation centres whose main job is encouraging and helping professors, graduate students, and other researchers patent their innovations and commercialize them (e.g. by licensing the patent). It is not sinister, it is normal procedure in academia.
It is if the research was paid for by the public.
When you have a colour chart for reference and known distances between the camera and the chart, it's not difficult to colour correct. They might have a better interpretation of the underwater imaging phenomenon, which anyone can reverse engineer and use (from the published work). Patents are just for reputation.
I hate to say it, but that was actually a really poorly edited and produced video. It spent way too long on b-roll and did a really poor job framing the problem.
I would have strongly preferred static images in the article and an interview video buried below.
I agree they should have gotten to the punch line and shown results rather than the doctor swimming.
Scuba divers already use post-processing in Lightroom or apps like Dive+ (http://dive.plus/). It will be interesting to see if this becomes popular in that community. The results are pretty good already with Dive+.
This would be a great application for deep learning. Use the authors' method to generate a lot of uncorrected-corrected pairs. Or, use a graphics engine to render realistic underwater scenes with and without water color. Then use a convolutional neural network to learn to mimic the transformation. Then any photographer can apply the learned filter without a color card or depth information.
Edit: already been done: https://arxiv.org/abs/1702.07392
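Not the authors' method, just a rough sketch of what that supervised setup could look like, assuming paired uncorrected/corrected crops are already available (PyTorch, with random tensors standing in for real data):

    # toy fully-convolutional network that learns a per-pixel colour transform
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()

    # stand-ins for real uncorrected/corrected image pairs, shape (N, C, H, W)
    uncorrected = torch.rand(8, 3, 128, 128)
    corrected = torch.rand(8, 3, 128, 128)

    for step in range(100):
        pred = model(uncorrected)          # predicted "water-free" colours
        loss = loss_fn(pred, corrected)    # match the reference correction
        opt.zero_grad()
        loss.backward()
        opt.step()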
You should take a look at the paper itself which is linked above. It does not require a color card in all cases, and the only information required is the depth, which can be obtained from a series of photos instead if required.
What you want is a few lasers of known wavelengths (RGB?) pointing at known angles such that, from the camera's perspective, they appear as lines (i.e. they aren't perpendicular to the plane of the photo sensor).
A calibration image (or images) can be taken before each shot. Possibly the resulting image correction could be integrated into the camera too.
The laser wavelengths are a substitute for the color chart. The laser angle means you get a reading at each distance (i.e. each point along the laser "line" in the image corresponds to a known distance).
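A rough sketch of how that calibration could work (illustrative numbers, not a tested procedure): sample the laser "line" in the image, pair each sample with its known water path length, and fit a Beer-Lambert falloff I(z) = I0 * exp(-beta * z) for that wavelength:

    import numpy as np

    # hypothetical samples along the laser line: path length (m) and observed intensity
    z = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    intensity = np.array([0.82, 0.55, 0.37, 0.25, 0.17, 0.11])

    # log-linear least squares: log I = log I0 - beta * z
    slope, log_i0 = np.polyfit(z, np.log(intensity), 1)
    beta = -slope
    print(f"estimated attenuation: {beta:.3f} per metre")

    # undo the attenuation for a pixel known to be d metres away
    d = 1.8
    observed = 0.30
    recovered = observed * np.exp(beta * d)

Repeating the fit for each laser wavelength would give a per-channel attenuation estimate, which is roughly the information the colour chart provides.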
That seems like a bad idea, because lasers are monochromatic.
Sure, a red laser looks red. But just because you only see 50% of the brightness of a red laser's light doesn't mean the absorption of all red colors will be 50%. It seems a lot better to use some broad-spectrum source of light.
Laser beams are only visible if they reflect off something, so in a lot of situations the camera just won't see them, except as points where they hit an object.
How is this different from the colour chart idea? If we know how some actual KNOWN RGB pixels look in a particular setup, we can apply the same filter across the image. Right?
The amount of color shift depends on how much water is between the object and the camera, so you need to have a depth map to recover the true colors. You can see how it compares to naive color transforms in the paper.
The difference is that in this case the same colour will look different depending on how far away it is.
With regular colour correction things have a slightly different colour regardless of how far away they are from the camera, so the task is way simpler.
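A toy illustration of that point (a simplified model with made-up coefficients, not the paper's actual formulation or values): with a per-pixel depth map you can undo a distance-dependent veil and attenuation, which a single global colour transform cannot do:

    import numpy as np

    def correct(img, depth, beta_d, beta_b, backscatter):
        # img: HxWx3 in [0,1]; depth: HxW in metres; coefficients per RGB channel
        z = depth[..., None]
        veil = backscatter * (1.0 - np.exp(-beta_b * z))       # distance-dependent haze
        direct = np.clip(img - veil, 0.0, None)                # remove backscatter
        return np.clip(direct * np.exp(beta_d * z), 0.0, 1.0)  # undo attenuation

    img = np.random.rand(240, 320, 3)                # stand-in underwater photo
    depth = np.random.uniform(1.0, 6.0, (240, 320))  # stand-in depth map
    out = correct(img, depth,
                  beta_d=np.array([0.40, 0.12, 0.08]),   # red attenuates fastest
                  beta_b=np.array([0.35, 0.20, 0.15]),
                  backscatter=np.array([0.10, 0.25, 0.30]))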
This looks really cool. She described how she takes photos that include her color chart, and I'm wondering if that's actually necessary to calibrate the process, or if that was just done for the purposes of developing it.
The researcher who authored the paper answered a few questions on Reddit last year [0]. She explained that the color chart isn’t necessary for every photo.
[0] https://www.reddit.com/r/videos/comments/dvts2j/this_researc...
It's necessary for calibration at shooting time. It's a very similar technique to using, say, a ColorChecker Passport or a SpyderCHECKR with their associated profile-creation software for still photography, the main difference being that attenuation underwater is much more variable than lighting conditions on dry land. You need to accommodate attenuation of both the incident light (light hitting the subject has to get there through local water conditions) and the light reflected from the subject, at different depths, at different times of the day and year, and with different "water" compositions (minerals, particulates, etc.).
Coincidentally: the researcher's name, Derya, translates to "sea" or "ocean" in her native Turkish. I wonder if that's something that might have inspired her as part of this.
I read this and wonder if the technique can be applied to space. Too bad we can’t take photos at a series of closing distances from something 50 million light years away.
I wonder if the algorithm would become better if the author not only swam closer but also took pictures at different path distances and angles simultaneously and mapped the images together. Maybe it would reveal that some of the editing work on hazy objects took a little too much liberty, and it would produce more accurate images.
Great read!
I don’t get how it’s not a photoshop, though; it’s just a really specific photoshop. "See how it would be on land"... but it wouldn’t be like that on land at all. This is no more real or fake than any other filter applied to pixels.
It's "not a photoshop" in that it's not someone manipulating the image until it looks good. It's an algorithm that (according to the video) uses precise physical modeling to achieve the correct color, regardless of aesthetic preferences.
Why would it matter if it is a "photoshop" or not?
We already use filters all the time on normal camera photos (e.g. Low Light ML on Pixels). As long as it's correcting the colour for us to be able to assess it better, and its accuracy is reasonably high, then it is gravy.
The moment you take a picture, your camera or phone already does tons of editing. Like really a lot! I assume you wouldn’t consider that “photoshopping”. Where do you draw the line?
For me, haze removal, water removal are like white balance. To me it’s not manipulation.
Did you watch the video where she demonstrated how the technology works and how she says it differs from photoshopping? She pretty clearly explains the difference.
It would be like that without water; that's the whole point of the algorithm: based on physics, it computes the (best known approximation to the) way the scene would look with no water present.
That is, it's supposed to make the photo be like the one you would take if you were to lift a part of the seafloor onto a boat and photograph it.
I agree with your view. An algorithm manipulating the image is similar to a person using Photoshop to manipulate it. The process being more automated doesn’t matter, because both situations alter the pixels for an altered outcome.
NASA and JPL do similar things with Mars surface images to bring out details and color differences. The orange dusty sky normally washes everything in an orange tint and softens shading.
paper [pdf]: http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkayn...
I don’t think that last bit is automatable with just glasses.