
Computational photography from selfies to black holes

204 points | dsego | 6 years ago | vas3k.com

58 comments

[+] sansnomme|6 years ago|reply
On the extreme end, if you live in the countryside you can replace GPS entirely using celestial navigation/star tracking. This is commonly used for rockets and satellites but right now if you own a truck or pickup you can easily go completely off-grid by mounting a lens on the rooftop. E.g. http://nova.astrometry.net/

For implementing the system on an embedded device (e.g. a toy drone, Raspberry Pi, etc.), the main data structure you want is a k-d tree together with some sort of evergreen star chart (it doesn't have to be extremely evergreen; current astronomy libraries can easily predict orbits for a couple of decades without significant skew/deviation unless you are aiming for centimeter-level geolocation accuracy).

For the hardware, you can either use existing consumer-grade gear followed by a ton of image processing with ML, as suggested above, or you can use an industrial-grade tracker, which easily runs into four figures.

https://blog.satsearch.co/2019-11-26-star-trackers-the-cutti...

It's a pretty fun weekend project. Here are some links to get started:

https://github.com/mrhooray/kdtree-rs https://github.com/astronexus/HYG-Database/blob/master/READM...

Instead of jacking up your truck, add celestial nav to it. Nothing screams freedom and independence more than cutting dependency on state-funded satellite systems. Caveats: it needs more signal processing during the daytime, and you have to fall back to inertial navigation when it's cloudy.
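
The k-d tree lookup at the heart of such a system fits in a few lines. Here is a toy pure-Python sketch (the three-star catalog, the function names, and the nearest-neighbor matching scheme are mine, heavily simplified from what the linked kdtree-rs and HYG resources provide):

```python
import math

def radec_to_unit(ra_deg, dec_deg):
    """Convert right ascension/declination (degrees) to a 3D unit vector."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra), math.cos(dec) * math.sin(ra), math.sin(dec))

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree over ((x, y, z), name) entries."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1),
            axis)

def nearest(node, target, best=None):
    """Return (squared_distance, name) of the catalog star nearest to target."""
    if node is None:
        return best
    (point, name), left, right, axis = node
    d = sum((a - b) ** 2 for a, b in zip(point, target))
    if best is None or d < best[0]:
        best = (d, name)
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, best)
    if diff ** 2 < best[0]:  # splitting plane closer than current best: check far side
        best = nearest(far, target, best)
    return best

# Tiny hypothetical catalog: (unit vector, star name)
catalog = [(radec_to_unit(ra, dec), name) for name, ra, dec in [
    ("Sirius", 101.287, -16.716),
    ("Vega",   279.235,  38.784),
    ("Polaris", 37.955,  89.264),
]]
tree = build_kdtree(catalog)

# A measured direction close to Vega should match Vega.
measured = radec_to_unit(279.3, 38.8)
print(nearest(tree, measured)[1])  # → Vega
```

A real implementation would match whole asterisms (relative angles between several detected stars) rather than single directions, since the camera's own orientation is initially unknown.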

[+] greglindahl|6 years ago|reply
Rockets and LEO satellites often use GPS these days because it's easier, but I am impressed by this DIY star tracker.
[+] ipsum2|6 years ago|reply
This is a great article!

> In fact, that's how Live Photo implemented in iPhones, and HTC had it back in 2013 under a strange name Zoe.

A reference to zoetropes, which were arguably one of the first "movies". https://en.wikipedia.org/wiki/Zoetrope

> To solve the problem, Google announced a different approach to HDR in a Nexus smartphone back to 2013. It was using time stacking.

I don't think time stacking is the appropriate term to use here, as standard HDR is also doing "time stacking", in that it takes multiple photos with different exposures across a small interval of time. Maybe "Fixed exposure fusion"?

I think there's a lot more to be done in computational photography, in research and engineering. In fusion of images from multiple cameras, we're barely scratching the surface. Exciting times ahead!
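
The fixed-exposure stacking being contrasted with classic HDR is easy to demo: averaging N same-exposure frames cuts random sensor noise by roughly √N. A toy sketch with synthetic noise (all names and numbers here are illustrative, not from any real pipeline):

```python
import random

def stack_frames(frames):
    """Average a burst of same-exposure frames pixel by pixel.
    Averaging N frames reduces random sensor noise by about sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(0)
true_scene = [100.0] * 10_000  # a flat grey patch, as ground truth

# Simulate an 8-frame burst with Gaussian read noise (sigma = 10)
burst = [[p + random.gauss(0, 10) for p in true_scene] for _ in range(8)]

def rms_error(img):
    """Root-mean-square deviation from the noiseless scene."""
    return (sum((a - b) ** 2 for a, b in zip(img, true_scene)) / len(img)) ** 0.5

print(round(rms_error(burst[0]), 1))             # single frame: ~10
print(round(rms_error(stack_frames(burst)), 1))  # 8-frame stack: ~10/sqrt(8) ≈ 3.5
```

Bracketed-exposure HDR instead varies exposure across the burst to extend dynamic range, which is why "time stacking" alone doesn't distinguish the two.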

[+] gdubs|6 years ago|reply
I dropped my previous iPhone while cycling home from work and lived for a while with a busted lens. It made for some interesting photos, but was mostly annoying.

As a result I started using my DSLR again, and I rediscovered how beautiful the photos were. They also print nicer.

This fall I finally replaced the busted phone with an iPhone Pro. The camera on this thing is great, but the computational photography enhancements are particularly nice.

But my DSLR still smokes it.

There’s the old maxim of “the best camera is the one you have on you.” I’m happy I went with the pro, and in isolation, it’s an amazing all around snapshot camera. Occasionally I get lucky with some stunning shot.

But the DSLR just wins hands down when it comes to shooting something like the foliage of a Japanese maple. The bokeh is beautiful straight out of the camera. The iPhone still struggles, and there’s a ton of matte noise around edges that needs to be cleaned up. It’s much better at things it was designed for, like portraits.

So, anecdotally, for me at least, it’s a mixed bag. I love both cameras for different reasons but ultimately the DSLR still has the edge on quality. But the iPhone is always there, and has some tricks (especially low light) that the DSLR can’t compete with.

For the foreseeable future, I don’t see my phone fully replacing my DSLR.

[+] numbol|6 years ago|reply
Sorry for my bad English, and maybe I am deeply wrong, but:

In extreme cases it's not even a "photo", in the sense of information about photons received by an optical system with noise reduction applied afterwards. No, it's just a picture, based on recognized faces, objects, and stars. And I don't know why, but I feel painfully bad about it. It is not an approximation of the world-as-it-is, but an expectation of the world-as-people-want-it. It can recognize a constellation from a few stars and will draw a nice picture of a great starry sky, but it will delete a Starlink satellite, a meteor, or a supernova as unexpected noise.

[+] GuB-42|6 years ago|reply
You can think of these enhanced pictures as an artistic interpretation, like a robot painter drawing your portrait. That's not a bad thing; good artists, and most likely good robots, know how to make something look good while preserving the essentials. Unless I am doing astronomy, I'd rather have a beautiful night sky than a speck of dust that might be a Starlink satellite. And if I am doing astronomy, having all the super-resolution features is really nice.

Unprocessed camera modes will continue to exist for people who want that. Maybe with some built-in digital signature in case it is used as proof.

Photography, and earlier than that, drawing, has always been world-how-people-want-it. Accuracy is just one of the things you may want.

[+] roywiggins|6 years ago|reply
> This approach pioneered by the guys, who liked to take pictures of star trails in the night sky. Even with a tripod, it was impossible to shot such pictures by opening the shutter once for two hours

Yes, but also you can't leave a digital sensor collecting for two hours: the pixels start saturating and the noise builds up, unlike film, which can do truly long exposures.
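
The stacking workaround the article describes is to take many short exposures that each stay below saturation and then blend them. For star trails, the blend is a per-pixel maximum ("lighten" mode). A toy 1-D sketch (the drifting-star setup is invented for illustration):

```python
def lighten_stack(frames):
    """Per-pixel maximum across frames: the classic star-trail blend.
    Each short exposure stays below saturation; the max preserves every
    bright pixel a star occupied as it drifted across the sensor."""
    return [max(px) for px in zip(*frames)]

# Toy 1-D "sensor": a star drifts one pixel per short exposure.
frames = []
for t in range(5):
    frame = [0] * 8
    frame[t] = 255  # star position during this exposure
    frames.append(frame)

print(lighten_stack(frames))  # → [255, 255, 255, 255, 255, 0, 0, 0]
```

The single bright dot in each frame becomes a continuous trail in the stack, with no pixel ever exceeding a single exposure's value.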

[+] ryandamm|6 years ago|reply
For what it's worth, film also suffers from reciprocity failure (nonlinear response at extended exposure times with low flux).

Some things were never easy.

[+] tomxor|6 years ago|reply
This is all really cool, but there's one thing you can't fully make up for with processing: optical zoom. (Digital zoom, no matter how much temporal super-resolution trickery you apply, still has a different angle of view.)
[+] m463|6 years ago|reply
DSLRs have exceptional lenses and exceptional (but large) sensors, which means the zooms are modest (up to around 600mm).

So the really interesting long-focal-length cameras are the all-in-one superzoom cameras like the Canon SX70 HS and Nikon P1000.

They accomplish the high magnification by using a sort-of-good lens with a sort-of-good small sensor, achieving up to 3000mm "equivalent" zoom.

Unfortunately, the "pro-sumer" design gives you an electronic viewfinder, slower and less accurate focus, and all kinds of other non-DSLR mediocrity.

sigh.
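
For reference, the "equivalent" focal length quoted for these superzooms is just the real focal length scaled by the sensor's crop factor relative to a full-frame 36×24mm sensor. A quick sketch (the 1/2.3" sensor dimensions and the 539mm figure are approximate):

```python
FULL_FRAME_DIAGONAL = 43.3  # mm, diagonal of a 36 x 24 mm sensor

def crop_factor(sensor_w, sensor_h):
    """Crop factor = full-frame diagonal / this sensor's diagonal."""
    return FULL_FRAME_DIAGONAL / (sensor_w ** 2 + sensor_h ** 2) ** 0.5

def equivalent_focal_length(actual_mm, sensor_w, sensor_h):
    """Field-of-view-equivalent focal length on a full-frame body."""
    return actual_mm * crop_factor(sensor_w, sensor_h)

# Rough 1/2.3" sensor dimensions (~6.17 x 4.55 mm), typical of superzooms
cf = crop_factor(6.17, 4.55)
print(round(cf, 2))                                      # ≈ 5.65
print(round(equivalent_focal_length(539, 6.17, 4.55)))   # a ~539 mm real lens → the ~3000 mm marketing number
```

The small sensor is what makes the huge "equivalent" reach physically feasible in a handheld body, and it is also exactly why the image quality trails a DSLR.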

[+] HPsquared|6 years ago|reply
What's the difference? (Other than needing a really densely-packed sensor, of course)
[+] someguyorother|6 years ago|reply
> Yes, it opens up a lot of possibilities for us today, but there is a hunch we're still trying to wave with hand-made wings instead of inventing a plane. One that will leave behind all these shutters, apertures, and Bayer filters.

> The beauty of the situation is that we can't even imagine today what it's going to be.

Optical phased arrays of nanoantennas?

[+] sandGorgon|6 years ago|reply
Here's my question: why isn't there a computational photography app startup that works on these phones (either Android or iPhone)?

Let's take Android: the Snapdragon 855 is a very standard flagship CPU and is in tons of phones (including the $450 Xiaomi K20 Pro). Why isn't there a computational photography app that works on these phones?

Why is the Pixel 4, which uses the same 855 chip, the only one that has this software? Is it patent-encumbered? Or is there some massive-dataset deep learning involved?

I'm surprised that there isn't a startup building these apps out there.

[+] ryandamm|6 years ago|reply
Keep in mind this is a breezy review / mashup of things that are commercially viable, commercially presumed (already built into everything), and academic curiosity, without much distinction between the categories. Fun read, but don't extrapolate too much from this article.
[+] StingyJelly|6 years ago|reply
The issue may be that the algorithms are trained on a specific camera unit, and gathering a large database of data from all possible cameras may be difficult even for a big player like Google. Google puts a ton of work into their camera app, and you can get clones that work on other devices with varying degrees of success. For example, I have a Mi 5s that uses the same sensor as the first Pixel, and I get quite decent results with arnova's GCam clone.
[+] CDSlice|6 years ago|reply
The Pixel 4 also has a custom chip just for image processing which other phones do not, so you can't just port the code over to other phones.
[+] anta40|6 years ago|reply
So someday we can expect a $1000-ish smartphone that, image-quality-wise, can compete with medium-format gear like Fuji or Hasselblad?
[+] rocqua|6 years ago|reply
I'd expect that at some point mirrorless cameras will get close to feature parity on the computational photography side, and will beat smartphones handily by having the space for a good lens and slightly bigger sensors.

Because all the tricks used on smartphones still work if you have a better lens and sensor; they will still yield these improvements. It's just going to take a while for the manufacturers to catch up, especially on the DSLR end, because of slow shutters and massive momentum.

[+] pabs3|6 years ago|reply
I wonder if there are any open source computational photography tools.
[+] vas3k|6 years ago|reply
Usually I don't mind people reposting my articles, as long as there's a direct reference to the original at the beginning: https://vas3k.com/blog/computational_photography/

Let's be respectful to the original author; I spent a couple of months writing it :(

[+] rozhok|6 years ago|reply
>The article was originally published by Vas3k in https://vas3k.com/ on 30.06.2019 Let's Enhance team is very thankful for given materials.

What a shame! They've completely ripped off your article and are "thankful" for it!

[+] jacquesm|6 years ago|reply
That's pretty low. I've mailed dang.
[+] meigwilym|6 years ago|reply
This is a fantastic article, and it's a shame that you're being ripped off.
[+] Chris2048|6 years ago|reply
Someone replying to you claims they had a conversation with you, is that true?
[+] ssivark|6 years ago|reply
Is there a way to call out this kind of shitty behavior, pissing in the commons pool? Maybe flagging the article and replacing the link with the original?
[+] StavrosK|6 years ago|reply
You can issue a DMCA takedown to their host.
[+] tudorw|6 years ago|reply
Thanks for writing this, fascinating.
[+] katecatkitty|6 years ago|reply
Hi Vas3k!

My name is Kate and I am Head of Content at Let's Enhance.

We are very thankful for the given material, because it's highly related to us.

And I want to clarify this unpleasant situation:

I've talked directly with the author of the article, Vas3k, and asked for permission to publish this awesome material. I can attach screenshots of our conversation.

All the copyrights are reserved. We mentioned you and your blog as the original source. What's more, we've saved all the links in the article that refer to your blog.

So, all the accusations are unfounded.

[+] jacquesm|6 years ago|reply
Flagging this. Paging dang for a link change and a ban on letsenhance.io.
[+] PopeDotNinja|6 years ago|reply
What's the problem with letsenhance.io? Honest question. I skimmed the article and didn't see anything off-putting.
[+] growlist|6 years ago|reply
These hand-drawn-look diagrams are starting to become a bit of a cliche.
[+] ryandamm|6 years ago|reply
This article is completely, utterly wrong as soon as it starts talking about anything plenoptic. Please disregard after that point as there are serious factual errors, particularly regarding what Google did or didn't do with Lytro.

(Source: I'm in the VR / plenoptic space, knew a bunch of people at Lytro, some of whom are now at Google. Timelines and facts do not match this article's assertions.)

[+] jacquesm|6 years ago|reply
That comment would be a lot more valuable if you corrected the record. Just gainsaying what is written here, backed only by an appeal to your own authority, is not how it is done, IMO; and if you can't or don't want to talk about it, then you also shouldn't comment like this.
[+] acrefoot|6 years ago|reply
I got a chance to work with a lightfield camera for a class project a few years before Lytro shipped its first cameras, so I was sad to see it struggle as a consumer product. Can you shed more light on what the article should have written?
[+] yori|6 years ago|reply
So it is your word against the OP's. Why should we believe you more than the OP? Please present some facts and citations that we can verify ourselves. Appeal to authority is a fallacious form of argument.