
Algorithm Removes Water from Underwater Images

418 points | fortran77 | 6 years ago | scientificamerican.com | reply

106 comments

[+] ivanech|6 years ago|reply
actual project site with more detailed description and before+after images: https://www.deryaakkaynak.com/sea-thru

paper [pdf]: http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkayn...

[+] Groxx|6 years ago|reply
"Sea-thru" is such a great name for this. Also I'm always glad to see things that aren't "we threw a neural net at it and it looks pretty good"
[+] toastal|6 years ago|reply
I just wish that site didn't require JavaScript just to view these images. How is this Wix behavior acceptable?
[+] 2bitencryption|6 years ago|reply
that's fascinating, and a lot more involved than what I expected (my guess was something more like off-the-shelf color correction)
[+] ComodoHacker|6 years ago|reply
On the first example, the corrected image reveals some details in the shadows but also burns out some highlights.
[+] andromeduck|6 years ago|reply
So it seems like it's just MVS → depth map, plus recovery of haze characteristics by sampling a color reference at a bunch of distances. That seems blindingly obvious unless I missed something.
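As a loose illustration of the "sample a color reference at several distances" step, here's a toy per-channel fit assuming a pure exponential falloff and no backscatter (the function name and the example numbers are made up; the actual paper fits a much richer model):

    import numpy as np

    def fit_attenuation(chart_rgb, observations):
        """Estimate per-channel attenuation coefficients (beta) from a color
        patch of known true color seen at several known distances.
        Toy model: observed = true * exp(-beta * z), with no backscatter."""
        chart_rgb = np.asarray(chart_rgb, dtype=float)
        zs = np.array([z for z, _ in observations], dtype=float)
        betas = []
        for c in range(3):
            obs = np.array([rgb[c] for _, rgb in observations], dtype=float)
            # log(observed / true) = -beta * z, so a least-squares slope gives beta
            slope = np.linalg.lstsq(zs[:, None], np.log(obs / chart_rgb[c]),
                                    rcond=None)[0][0]
            betas.append(-slope)
        return np.array(betas)

    # e.g. a mid-grey patch photographed at 2 m, 5 m and 10 m (made-up readings)
    betas = fit_attenuation((0.5, 0.5, 0.5),
                            [(2.0, (0.22, 0.35, 0.42)),
                             (5.0, (0.06, 0.21, 0.33)),
                             (10.0, (0.01, 0.09, 0.22))])
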
[+] cetra3|6 years ago|reply
> Sea-thru is protected by a registered patent owned by Carmel Ltd the Economic Co. of the Haifa University and its subsidiary SeaErra

Looks like they are going to monetise this technology at some point, given the disclaimer at the bottom of the page. This is not wrong. But it feels like a PR exercise dressed up as something academic, which is a little creepy.

[+] smnrchrds|6 years ago|reply
It is not unusual in academia to patent all innovations with potential commercial applications. At least in Canada, universities typically have innovation centres whose main job is encouraging and helping professors, graduate students, and other researchers patent their innovations and commercialize them (e.g. by licensing the patent). It is not sinister, it is normal procedure in academia.
[+] sytelus|6 years ago|reply
All universities seem to be patenting the hell out of their research these days.
[+] balfirevic|6 years ago|reply
> This is not wrong.

It is if the research was paid for by the public.

[+] hijklmno|6 years ago|reply
When you have a colour chart for reference and known distances between the camera and the chart, it's not difficult to colour correct. They might have a better interpretation of the underwater imaging phenomenon, which anyone can reverse engineer and use (from the published work). Patents are just for reputation.
[+] chairmanwow1|6 years ago|reply
I hate to say it, but that was actually a really poorly edited and produced video. It spent way too long on b-roll and did a really poor job framing the problem.

I would have strongly preferred static images in the article and an interview video buried below.

[+] ec109685|6 years ago|reply
The video lets them show a 30-second ad, which monetizes much better than static ads.

I agree they should have gotten to the punch line and shown results rather than the doctor swimming.

[+] a_t48|6 years ago|reply
Some static before/after pictures would _really_ help this article. I get that it's intended to be consumed as a video, but come on.
[+] erikig|6 years ago|reply
For anyone that’s interested - skip to the last 30s of the video for the best before/after examples.
[+] wereHamster|6 years ago|reply
Not a single image in the article. An article about images. What a shame.
[+] wallflower|6 years ago|reply
Scuba divers already use post-processing in Lightroom or apps like Dive+ (http://dive.plus/). It will be interesting to see if this becomes popular in that community. The results are pretty good already with Dive+.
[+] peteretep|6 years ago|reply
Looks a lot stronger than the Dive+ images I've seen (and created)
[+] blt|6 years ago|reply
This would be a great application for deep learning. Use the authors' method to generate a lot of uncorrected-corrected pairs. Or, use a graphics engine to render realistic underwater scenes with and without water color. Then use a convolutional neural network to learn to mimic the transformation. Then any photographer can apply the learned filter without a color card or depth information.

Edit: already been done: https://arxiv.org/abs/1702.07392
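For concreteness, a rough sketch of what that training setup could look like, as a tiny fully convolutional net in PyTorch (the architecture, names and sizes here are invented for illustration, not taken from either paper):

    import torch
    import torch.nn as nn

    # Toy image-to-image network that learns the uncorrected -> corrected mapping
    # from pairs produced by a physics-based method or a renderer.
    class WaterRemovalNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):        # x: (N, 3, H, W) uncorrected images in [0, 1]
            return self.net(x)

    model = WaterRemovalNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()

    def train_step(uncorrected, corrected):
        """One gradient step on a batch of (uncorrected, corrected) image pairs."""
        optimizer.zero_grad()
        loss = loss_fn(model(uncorrected), corrected)
        loss.backward()
        optimizer.step()
        return loss.item()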

[+] aspaceman|6 years ago|reply
You should take a look at the paper itself, which is linked above. It does not require a color card in all cases, and the only information needed is the depth, which can be obtained from a series of photos if necessary.
[+] iicc|6 years ago|reply
What you want is a few lasers of known wavelengths (RGB?) pointing at known angles such that, from the camera's perspective, they appear as lines (i.e. they aren't perpendicular to the plane of the photo sensor).

A calibration image (or images) can be made before each shot. Possibly the resulting image correction can be integrated into the camera too.

The laser wavelengths are a substitute for the color chart. The laser angle means you get a reading at each distance (i.e. in the image, each point on the laser "line" corresponds to a distance).

[+] ynniv|6 years ago|reply
Her algorithm doesn't require calibration. The color chart in many of the photos is to demonstrate the effect, or was just habit when she took them.
[+] rocqua|6 years ago|reply
That seems like a bad idea, because lasers are monochromatic.

Sure, a red laser looks red. But just because you only see 50% of a red laser's brightness doesn't mean the absorption of all red colors will be 50%. It seems a lot better to use a broad, mixed source of light.

[+] simonh|6 years ago|reply
Laser beams are only visible if they reflect off something, so in a lot of situations the camera just won't see them, except as points where they hit an object.
[+] ismepornnahi|6 years ago|reply
How is this different from the colour chart idea? If we know how some actual KNOWN RGB pixels look in a particular setup, we can apply the same filter across the image. Right?
[+] Scaevolus|6 years ago|reply
The amount of color shift depends on how much water is between the object and the camera, so you need to have a depth map to recover the true colors. You can see how it compares to naive color transforms in the paper.
[+] kolinko|6 years ago|reply
The difference is that in this case the same colour will look different depending on how far away it is.

With regular colour correction, things have a slightly different colour regardless of how far away they are from the camera, so the task is way simpler.
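To make that distance dependence concrete, here's a minimal sketch of inverting a simplified image-formation model, I = J*exp(-beta*z) + B*(1 - exp(-beta*z)). The per-channel beta and veiling-light values are placeholders you'd have to estimate; Sea-thru itself fits richer, depth-dependent coefficients.

    import numpy as np

    def correct(image, depth, beta, veiling):
        """Invert I = J*exp(-beta*z) + B*(1 - exp(-beta*z)) per channel.
        image: HxWx3 floats in [0, 1]; depth: HxW range map in metres;
        beta/veiling: length-3 per-channel constants (placeholders here)."""
        z = depth[..., None]                        # broadcast depth over channels
        transmission = np.exp(-np.asarray(beta) * z)
        backscatter = np.asarray(veiling) * (1.0 - transmission)
        recovered = (image - backscatter) / np.clip(transmission, 1e-6, None)
        return np.clip(recovered, 0.0, 1.0)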

[+] Rainymood|6 years ago|reply
What if I now take a bunch of before & after images and train a neural network on them to "learn" the mapping? Who would the neural network belong to?
[+] lilyball|6 years ago|reply
This looks really cool. She described how she takes photos that include her color chart, and I'm wondering if that's actually necessary to calibrate the process, or if that was just done for the purposes of developing it.
[+] iask|6 years ago|reply
The change in the shades of the color chart (due to lighting) is probably used as a tolerance in the algorithm.
[+] stan_rogers|6 years ago|reply
It's necessary for calibration at shooting time. It's a very similar technique to using, say, a ColorChecker Passport or a SpyderCHECKR with their associated profile-creation software for still photography, the main difference being that attenuation underwater is much more variable than lighting conditions on dry land. You need to accommodate attenuation of both the incident light (light hitting the subject has to get there through local water conditions) and the light reflected from the subject, at different depths, at different times of the day and year, and with different "water" compositions (minerals, particulates, etc.).
[+] stevenjohns|6 years ago|reply
Coincidentally: the researcher's name, Derya, translates to "sea" or "ocean" in her native Turkish. I wonder if that's something that might have inspired her as part of this.
[+] nighthawk648|6 years ago|reply
I read this and wonder if the technique can be applied to space. Too bad we can't take photos at closing distances of something 50 million light years away.

I wonder if the algorithm would do better if the author not only swam closer but also took pictures at different path distances and angles simultaneously and mapped the images together. Maybe that would reveal where the editing work on hazy objects took a little more liberty, and produce more accurate images.

Great read!

[+] vijay_nair|6 years ago|reply
Perhaps a real-time version of this could be embedded into diving goggles (AR) for murk-free dives.
[+] dmix|6 years ago|reply
Currently it's only photos, and it requires a carefully placed colour chart to sync up the colours.

I don’t think that last bit is automatable with just glasses.

[+] luxuryballs|6 years ago|reply
I don't get how it's not a photoshop, though; it's just a really specific photoshop. "See the stuff as it would look on land"... but it wouldn't look like that on land at all. This is no more real or fake than any other filter applied to pixels.
[+] lilyball|6 years ago|reply
It's "not a photoshop" in that it's not someone manipulating the image until it looks good. It's an algorithm that (according to the video) uses precise physical modeling to achieve the correct color, regardless of aesthetic preferences.
[+] JauntyHatAngle|6 years ago|reply
Why would it matter if it is a "photoshop" or not?

We already use filters all the time on normal camera photos (e.g. low-light ML on Pixels). As long as it's correcting the colour for us to be able to assess it better, and its accuracy is reasonably high, then it is gravy.

[+] Jaxan|6 years ago|reply
The moment you take a picture, your camera or phone already does tons of editing. Like, really a lot! I assume you wouldn't consider that "photoshopping". Where do you draw the line?

For me, haze removal and water removal are like white balance. To me it's not manipulation.

[+] antris|6 years ago|reply
Did you watch the video where she demonstrated how the technology works and how she says it differs from photoshopping? She pretty clearly explains the difference.
[+] devit|6 years ago|reply
It would be like that without water; that's the whole point of the algorithm: based on physics, it computes the best known approximation to the way the scene would look with no water present.

That is, it's supposed to make the photo be like the one you would take if you were to lift a part of the seafloor onto a boat and photograph it.

[+] rolltiide|6 years ago|reply
It's just semantics. It's a filter that finds color information closer to reality and ignores aesthetics.
[+] sysbin|6 years ago|reply
I agree with your view. An algorithm is manipulating the image, similar to a person using Photoshop to manipulate an image. The process being more automated doesn't matter, because both situations alter the pixels for an altered outcome.
[+] tabtab|6 years ago|reply
NASA and JPL do similar things with Mars surface images to bring out details and color differences. The orange dusty sky normally washes everything in an orange tint and softens shading.
[+] DocG|6 years ago|reply
An article that is not really an article, but a video about images. Why? :(
[+] miguelrochefort|6 years ago|reply
Is anyone aware of something similar that works when the picture is taken outside the water? Something that could remove reflection and glare.
[+] lqet|6 years ago|reply
> It does not use neural networks and was not trained on any dataset.