We had an interesting discussion about this a few nights ago at a photojournalism talk.
In that field, digital edits are strictly banned, to the point that multiple very well-known photojournalists have been fired for a single use of the clone tool [1] and other minor edits.
It's interesting to think I can throw an f/1.8 lens on my DSLR and take a very shallow depth of field photo, which is OK, even though it's not very representative of what my eyes saw. If I take the photo at f/18 then use an app like the one linked, producing extremely similar results, that's banned. Fascinating what's allowed and what's not.
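For what it's worth, the optics back this up: under the standard thin-lens circle-of-confusion approximation, the background blur disc shrinks in direct proportion to the f-number, so f/18 blurs roughly a tenth as much as f/1.8. A quick sketch (the 50mm lens, 2m subject, and 10m background are made-up illustrative numbers):

```python
# Approximate blur-disc (circle of confusion) size on the sensor for an
# out-of-focus background, using the standard thin-lens approximation:
#   c = (f^2 / N) * |D - s| / (D * (s - f))
# f: focal length, N: f-number, s: focused distance, D: background distance
# (all distances in meters).

def blur_disc_mm(f, N, s, D):
    c = (f * f / N) * abs(D - s) / (D * (s - f))
    return c * 1000.0  # meters -> millimeters

# 50mm lens focused at 2m, background at 10m:
wide_open = blur_disc_mm(0.050, 1.8, 2.0, 10.0)    # f/1.8: ~0.57 mm disc
stopped_down = blur_disc_mm(0.050, 18.0, 2.0, 10.0)  # f/18: ~0.057 mm disc
```

Since the disc diameter scales as 1/N, the two apertures differ by exactly a factor of 10 here, which is why the algorithmic blur can plausibly mimic the wide-open result.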
I find it even more interesting that changing color photos to B/W is allowed, as is almost anything that "came straight off the camera," no matter how far it strays from what your eyes saw.
[1] http://www.toledoblade.com/frontpage/2007/04/15/A-basic-rule...
When you say "digital edits are seriously banned," I think that's overreaching. I'm a former newspaper editor, and "digital edits" in the form of adjusting levels, color correction, etc., are performed on every single photo.
What's not allowed, as you allude to, is retouching a photo.
So does introducing blur after the photo was taken count as retouching, or does it fall into the same category as color correction? It's an interesting question. On the one hand, it has the potential to obscure elements of the picture, which seems like retouching, but on the other hand, you could just as easily achieve the same effect with a DSLR and there would be no outcry.
Radiologists are the same... You take a Computed Tomography series of images, doing all kinds of back-projection (duh) and volumetric anisotropic edge-preserving smoothing, and then you apply segmentation and false-color, semi-transparent transfer functions and lighting...
And then you apply one more post-processing effect to try to highlight something, and they freak out that you're no longer showing them the raw data!
That does not mean what you think it does.
On a slightly related tangent, every time I see someone post a "#nofilter" photo on Facebook, I have to bite my tongue to keep from asking them how their camera managed to separate the RGB channels without a Bayer (or similar, e.g. Foveon) filter.
I mean, I sort of get what they're saying, and I hate the overuse of Instagram filters probably more than the next guy, but the odd importance people place on getting an image "directly out of the camera" is bizarre, considering the incredible number of decisions (sometimes correct, often not) the average digital camera has made for you in turning the measured light into a JPEG.
Personally, I much prefer to shoot RAW and then post-process because cameras, as amazing as they are in some ways, are still incredibly dumb when it comes to context and intent, and I'm going to do a better job of getting the white balance, dynamic range, contrast, etc. right (for how the shot was intended) than the camera is.
On the one hand, you have dodging and burning, which were often used in actual darkrooms and are still used by respected photojournalists to increase the impact of their photos. [0]
Then you have things like this: http://en.wikipedia.org/wiki/Adnan_Hajj_photographs_controve...
where clumsy and obvious use of the clone tool damaged the reputation of an entire news organization.
The AP standards and practices strike an interesting balance [1]:
AP pictures must always tell the truth. We do not alter or digitally
manipulate the content of a photograph in any way.
The content of a photograph must not be altered in Photoshop or by
any other means. No element should be digitally added to or subtracted
from any photograph. The faces or identities of individuals must not
be obscured by Photoshop or any other editing tool. Only retouching
or the use of the cloning tool to eliminate dust on camera sensors
and scratches on scanned negatives or scanned prints are acceptable.
Minor adjustments in Photoshop are acceptable. These include cropping,
dodging and burning, conversion into grayscale, and normal toning and
color adjustments that should be limited to those minimally necessary
for clear and accurate reproduction (analogous to the burning and
dodging previously used in darkroom processing of images) and that
restore the authentic nature of the photograph. Changes in density,
contrast, color and saturation levels that substantially alter the
original scene are not acceptable. Backgrounds should not be digitally
blurred or eliminated by burning down or by aggressive toning. The
removal of “red eye” from photographs is not permissible.
[0] http://www.poynter.org/uncategorized/14840/a-photojournalist...
[1] http://www.ap.org/company/news-values
> It's interesting to think I can throw an f/1.8 lens on my DSLR and take a very shallow depth of field photo, which is OK, even though it's not very representative of what my eyes saw
I beg to differ. Pictures with a shallow depth of field feel more real because that is how the eyes work naturally. Hold up your hand at full arm's length and focus on it with your eyes. Everything else around it is blurred.
The data that the photo conveys should not be edited (i.e. the people in it, the objects in it; the framing shouldn't be used to intentionally remove data relevant to the subject, etc.), but the mood or style of the photo may be edited: colors, contrast, some stylistic effects, lighting, depth of field, etc.
It's fairly obvious what's over the line and what isn't in 99% of cases.
Photojournalism is a weird crossover of history/evidence collection drifting into art. The two fit together like oil and water, for all the reasons you pointed out.
Could you speak to how familiar photojournalists are with the math and transformations that take place inside a digital camera?
For example, your camera almost certainly has dead pixels, which are processed away during the demosaicing stage, but showing them would be a "straight off the camera" image that I doubt any photojournalist would desire.
Additionally, many scene-wide processing steps (like lens shading map estimation) can be changed in post-processing if the camera's automatic algorithms (3A, etc.) "decided" wrong.
They should make an exception for photos appearing online, since you can always link to the original one. Maybe a stipulation that the edited photos can't be used in print, unless accompanied by the original.
Regarding the technology (achieving shallow depth of field through an algorithm), not Google's specific implementation ...
Up until now, a decently shallow depth of field was pretty much only achievable in DSLR cameras (and compacts with sufficiently large sensor sizes, which typically cost as much as a DSLR). You can simulate it in Photoshop, but generally it takes a lot of work and the results aren't great. The "shallow depth of field" effect was one of the primary reasons why I bought a DSLR. (Yeah, yeah, yeah, quality of the lens and sensor are important too.) Being able to achieve a passable blur effect, even if it's imperfect, on a cellphone camera is really pretty awesome, considering the convenience factor. And if you wanted to be able to change the focus after you take the picture, you had to get a Lytro light field camera -- again, as expensive as a DSLR, but with a more limited feature set.
Regarding Google's specific implementation ...
I've got a Samsung Galaxy S4 Zoom, which hasn't yet gotten the Android 4.4 update, so I can't use the app itself to evaluate the Lens Blur feature, but based on the examples in the blog post, it's pretty good. It's clearly distinguishable from optical shallow depth of field, but it's not so bad that it's glaring. That you can adjust the focus after you shoot is icing on the cake, but tremendously delicious icing. The S4 Zoom is a really terrific point-and-shoot that happens to have a phone, so I'm excited to try it out. Even if I can use it in just 50% of the cases where I now lean on my DSLR, it'll save me from having to lug a bulky camera around AND be easier to share over wifi/data.
https://refocus.nokia.com/
edit - better link: http://www.engadget.com/2014/03/14/nokia-refocus-camera-app-...
I believe the algorithm could be improved by applying the blur to certain areas/depths of the image without including pixels from very distant depths, and instead blurring/feathering edges with an alpha channel over those distant (large depth separation) pixels.
For example, if you look at the left example photo by Rachel Been[1], the hair is blurred together with the distant tree details. If instead the algorithm detected the large depth separation there and applied the foreground blur edge against an alpha mask, I believe the results would look a lot more natural.
[1] http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAAR...
Occlusion boundaries are always a challenge for vision algorithms. The defects you see are probably due to incorrect depth estimates in these regions. My guess is that for efficiency on a mobile device, they are using a simple window to aggregate the SAD measurements rather than a more complex term which weights errors based on color similarity. The simpler method will not perform well at these boundaries. Getting correct depth and foreground/background separation is a chicken and egg problem, though there are papers which aim for a joint optimization.
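To illustrate the failure mode, here is a toy 1-D version of that kind of fixed-window SAD aggregation (not Google's actual algorithm, just a sketch of the general technique): pixels near the shifted step edge recover the true disparity, while textureless stretches are ambiguous and produce unreliable estimates.

```python
# Toy 1-D stereo matcher using a fixed SAD (sum of absolute differences)
# window. Real matchers use 2-D windows and subpixel refinement; this
# just shows the aggregation idea.

def sad_disparity(left, right, max_disp=3, half_win=1):
    n = len(left)
    disps = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            # compare a small window around left[x] against right[x - d]
            cost = 0
            for k in range(-half_win, half_win + 1):
                xl = min(max(x + k, 0), n - 1)
                xr = min(max(x + k - d, 0), n - 1)
                cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disps.append(best_d)
    return disps

# A step edge that appears 2 pixels further right in the left image:
left  = [10, 10, 10, 80, 80, 80, 80, 80]
right = [10, 80, 80, 80, 80, 80, 80, 80]
# Edge pixels recover the true disparity of 2; flat regions are ambiguous.
print(sad_disparity(left, right))
```

The ambiguity in the flat regions is exactly why occlusion boundaries and featureless areas come out wrong without a color-similarity weighting or a joint optimization.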
Getting a good alpha channel around objects is pretty hard live and with high megapixel counts. If you want to do this in-camera or within half a second, it's going to be sloppy for now. Essentially you have to check out a bunch of different 'magic wand' settings and compare the results, and there's no validation of those results until the user sees them. But yeah, it should definitely be possible to scoot the blur point "backwards" a bit.
I'd love to see some test images with that (just get a side-by-side with the app and a larger aperture camera).
As I understand what you're proposing, I'm not sure it would actually be closer to what a large-aperture camera would capture. The light field from the farther depth field should be convolving with the light field from the near depth field.
Still, side-by-side would be the best way to view these :) I'll do it later this weekend if I get the chance.
Is the app taking more than one photo? It wasn't clear in the blog post.
AFAIU, to have any depth perception you need to take more than one photo. Calculate the pupil distance (the distance the phone moved), then match image features between the two or more images, and calculate the amount of movement between the matching features to derive the depth.
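The core relationship is simple similar-triangles triangulation; a sketch with made-up numbers (the focal length in pixels and the baseline from the sweep are things the app would have to estimate):

```python
# Depth from parallax: a feature that shifts d pixels between two views
# taken a baseline B apart, with focal length f (in pixels), sits at
#   Z = f * B / d
# Larger shifts mean closer objects; zero shift means "at infinity".

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: ~3000 px focal length, a 5 cm upward sweep.
near = depth_from_disparity(3000, 0.05, 150)  # large shift -> ~1 m away
far  = depth_from_disparity(3000, 0.05, 15)   # small shift -> ~10 m away
```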
As described, you then map the depth into an alpha transparency and apply the blurred image with varying blur strength over the original image.
Since you're able to apply the blur after the photo is taken, it would mean the Google camera always takes more than one photo.
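That per-pixel mapping can be sketched in a few lines (hypothetical names; grayscale pixels and a single pre-blurred image to keep it tiny — the real app presumably varies the blur kernel with depth instead):

```python
# Minimal sketch of the compositing step described above: map each
# pixel's distance from the chosen focal depth to a blend weight, then
# mix the sharp and blurred images with that weight as alpha.

def lens_blur(sharp, blurred, depth, focal_depth, max_sep=4.0):
    out = []
    for s, b, z in zip(sharp, blurred, depth):
        alpha = min(abs(z - focal_depth) / max_sep, 1.0)  # 0 = in focus
        out.append((1.0 - alpha) * s + alpha * b)
    return out

sharp   = [100.0, 200.0, 50.0]
blurred = [120.0, 150.0, 90.0]
depth   = [2.0,   2.0,   6.0]  # meters, per pixel
# Pixels at the focal depth keep the sharp value; distant ones blend in blur.
print(lens_blur(sharp, blurred, depth, focal_depth=2.0))  # -> [100.0, 200.0, 90.0]
```

Because the depth map and the sharp image are both stored, re-running this with a different `focal_depth` is what makes refocusing after the fact cheap.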
Also, a cool feature would be to animate the transition from no blur to DOF blur as a short clip, or to use the depth information to apply effects other than blur, like selective coloring or other filters.
No, you don't. It's very tricky, but doable with one:
http://www.cs.cornell.edu/~asaxena/learningdepth/
http://www.cs.cornell.edu/~asaxena/reconstruction3d/
I sure wish you could buy a DSLR that just plugs into your iPhone. I don't want any of that terrible DSLR software -- just the hardware.
I think many devices should become BYOD (bring your own device) soon, including big things like cars.
edit: I don't just want my pictures to be saved on my phone. I'd like the phone to have full control of the camera's features -- so I can use apps (like timelapse, hdr, etc.) directly within the camera.
None of these solutions are perfect, but just fyi:
1) Samsung has released the Galaxy NX: a mirrorless interchangeable lens camera, which has a DSLR-sized APS-C sensor, but no actual mirror. The back is essentially just a big Android phone.
2) Sony sells the QX-10 and QX-100, which are just the lens, and you wirelessly connect them to your phone. The QX-100 has the same 1" sensor as the best pocket camera you can currently buy (the RX100).
3) Both Sony and Canon make wireless cameras (Canon 6D [full-frame], Sony RX100 MII [1" sensor] or A7/A7R [full-frame], maybe others) that let you connect to a phone or tablet and view a live feed of what the camera sees, change the aperture or other settings, trigger the shutter, and receive photos on your device. I'm unclear how open the Canon API is, but Sony has their own ecosystem of interesting apps (http://playmemoriesonline.com/) that let you do things like set up time lapses. The Sony UI is pretty clunky though; I would greatly prefer an open API.
I'm interested why you'd want a DSLR, though, because if it attaches to my phone, I'd probably be happy to use the phone screen as the viewfinder and save the depth and weight that would otherwise go to a moving mirror assembly.
I recently got a Canon 6D. It has built-in wireless, which allows you to connect directly to your smartphone so you can download photos from the camera to your phone. It works really well. You can also use your phone as a remote control to take photos, and the viewfinder can be seen on your phone. I love that camera.
Although, I use it on Android: DSLR Controller (BETA), with an OTG-capable Android phone, works with almost all digital DSLR cameras.
I use it with my 600D to take timelapses (the camera does not have that built in).
You can use it with a tablet to have a bigger screen, for example. And it supports almost every setting.
EDIT: Free "does it work with your device" version: https://play.google.com/store/apps/details?id=us.zig.dslr
Full version: https://play.google.com/store/apps/details?id=eu.chainfire.d...
I did a quick comparison of a full-frame SLR vs. a Moto X with this lens blur effect. I tried to match the blur amount but made no other adjustments. It works really well compared to everything else I have seen!
http://onionpants.s3.amazonaws.com/IMG_0455.jpg
It doesn't give me a headache, but to my eyes it isn't remotely convincing. There's no way I'd look at any of those example pictures and not realize the blurring was done with postprocessing.
I agree that it irks me. But I find that irritation hard to justify: Shallow depth of field is artificial in all cases. Your vision doesn't really work that way. Even saying "Your eye is like a camera" is only partly true. You can't get at the unprocessed image. What you think you see isn't the image on your retina.
So saying that the effect as a result of a wide-open aperture is more truthful than algorithmically blurring the background of a photo seems odd. Both are a photographic artifice that approximates what you think you see when your attention is on one object in your field of view.
The same is true for the effects of focal length. A longer lens approximates, but can never actually reproduce, the effect of the brain trying to make same-sized things look the same size. A shorter focal length does the opposite, and puts more emphasis on foreground objects.
It doesn't look that realistic, but it's way better than the awful tilt-shift effect (or whatever it's called) on Instagram, which, as far as I can tell from my Facebook feed, is used 99% of the time as a poor man's bokeh.
Any form of blurriness can have this effect - I presume your eyes are interpreting the blurriness as something not being in focus, and are repeatedly trying to focus, causing strain. (I first observed this when watching someone play World of Warcraft: when their character got drunk, it made the whole screen blurry and gave me a headache.)
Yes, it's definitely causing me discomfort to look at them. I see the blurred area, and try and fail to focus on it. 3D effects really don't translate well to a 2D display. I get what they're going for, but I'm personally not a fan of the effect. I'd rather have the nuanced little details in the background, in case one day I want to see that too. The effect is lovely for extreme close-up pictures (of insects, flowers, water drops, ...), though.
I like the idea of storing the depth information (and preferred focal point) inside the image, and allowing the viewer to decide whether they want depth of focus effects, and if so, how strongly they want them enabled.
Doesn't look totally convincing, but it's good for a first version.
The real problem with things like this is that the effect became cool by virtue of needing dedicated equipment. Take that away, and people's desire to apply the effect will be greatly diminished.
Wow.. this is missing the entire point of why lens blur occurs. Lens blur in normal photographs is the price you pay for focusing sharply on a subject. The reason photos with blur look "cool" is not the blur itself but that the subject is so sharply focused that its details are an order of magnitude better. If you take a random photo, calculate a depth map somehow, and blur out everything but the subject, then you are taking information away from the photo without adding information to the subject. The photos would look "odd" to trained eyes at best. For casual photography, it may look slightly cool on small screens like phones because of the relatively increased perceived focus on the subject, but it's fooling the eyes of the casual viewer. If they really want to do it (i.e., add more detail to the subject), they should use multiple frames to increase the resolution of the photograph. There is a lot of research being done on that. Subtracting detail from the background without adding detail to the subject is like applying an Instagram filter. It may be cool to teens, but professional photographers know it's bad taste.
Currently, the cost of LIDAR is prohibitive for making (or even experimenting with) a DIY self-driving car.
These algorithms can allow you to have a self-driving car with only cameras. But, there would be a lot of problems if you tried to make a camera-only system for consumer vehicle navigation. Vision systems need distinct "features" in images to find and track across frames to allow you to compute distance, speed, etc. If you don't have many features, pure vision approaches won't work. Nighttime operation is a big problem, as is driving on relatively smooth, featureless terrain.
The basic downside is that standard consumer cameras are passive devices. That's why Google uses LIDAR - it's an "active" technology that creates its own features. And driving is an application where the usual computer vision "it works most of the time" is just not good enough. Time-of-flight cameras are interesting sensors that combine active with passive technology. As this technology matures, it might allow for self-driving cars without LIDAR.
Yes. Machine vision has advanced a lot in recent years and it might just be possible. There is at least one startup trying to make self-driving cars with just machine vision.
Computing the differences between several cameras can be a judge of distance, but you can also see how much an object moves as the car moves, and get an estimate from normal machine vision (how big objects like that normally are, the objects nearby, where its shadow is, etc.).
In the future, maybe. Computing the depth maps in this simple case is not very costly because it only requires a relatively sparse set of key points. Recreating the full scene geometry around a car moving at 100 km/h is a lot harder: not only do you need a massive number of key points, but you also need to produce a depth map in millisecond time frames.
A couple of other really cool depth-map implementations:
1) The Seene app (iOS app store, free), which creates a depth map and a pseudo-3d model of an environment from a "sweep" of images similar to the image acquisition in the article
2) Google Maps Photo Tours feature (available in areas where lots of touristy photos are taken). This does basically the same as the above but using crowdsourced images from the public.
IMO the latter is the most impressive depth-mapping feat I've seen: the source images are amateur photography from the general public, so they are randomly oriented (and without any gyroscope orientation data!), and uncalibrated for things like exposure, white balance, etc. Seems pretty amazing that Google have managed to make depth maps from that image set.
I find it funny that this was one of the "exclusive features" of the HTC One M8, thanks to its double camera, and days after its release Google is giving the same ability to every Android phone.
I'm sure the HTC implementation works better, but this is still impressive.
The interesting part is not that it can blur a part of the image. The interesting part is that it can generate a depth map automatically from a series of images taken from different points of view, using techniques used in photogrammetry.
This app is now on the Play Store and works with most phones and tablets running Android 4.4 KitKat. Unfortunately it seems to crash on my S3 running CM 11, but your experience may vary.
https://play.google.com/store/apps/details?id=com.google.and...