top | item 38609441


ticklemyelmo|2 years ago

With less than an inch of separation, is the sense of depth even perceptible?
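A back-of-envelope check: the angular disparity between two viewpoints separated by a baseline b, looking at a point at distance z, is roughly b/z radians, which can be compared against human stereoacuity (around 0.5–1 arcminute). The ~19 mm baseline below is an assumption about the lens spacing, not a published spec:

```python
# Rough estimate of binocular disparity from a small stereo baseline.
# The baseline and viewing distances are assumptions, not iPhone specs.
import math

baseline_m = 0.019  # ~0.75 in between the two lenses (assumed)

for dist_m in (0.5, 1.0, 3.0, 10.0):
    # Angular disparity between the two views of a point at this distance
    disparity_rad = 2 * math.atan((baseline_m / 2) / dist_m)
    disparity_arcmin = math.degrees(disparity_rad) * 60
    print(f"{dist_m:5.1f} m -> {disparity_arcmin:6.2f} arcmin")
```

Even at 10 m the disparity works out to several arcminutes, comfortably above the stereoacuity threshold, so some depth should be perceptible; it will just be flatter than what a human ~65 mm interocular baseline gives.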

ynniv|2 years ago

They're probably fusing both lenses with the Lidar and some other tricks to reliably compute a dense surface. That would explain their suggestion not to move the camera very much, as that would cause a large portion of the mesh to be rebuilt. A blogger exported what appears to be two side by side videos, so maybe the view really is narrow or reconstruction happens at playback. There might also be Lidar data in there that he didn't notice.

Apple bought C3 Technologies a decade ago, and they use this technique to fuse photos from low-flying charter planes to produce the 3D view in Apple Maps.

[ Paper: https://ui.adsabs.harvard.edu/abs/2008SPIE.6946E..0DI/abstra... ]

[ Coverage: https://9to5mac.com/2011/10/29/apple-acquired-mind-blowing-3... ]

[ Similar: https://web.stanford.edu/class/ee367/Winter2021/projects/rep... ]

Reubend|2 years ago

Pure speculation: when combined with the LIDAR depth sensor, the two cameras probably don't need as much physical separation to accurately create a depth map. The bigger problem is the inpainting needed to generate hidden detail when the movie is viewed from angles that are different from the one it was actually filmed from.
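A minimal sketch of the kind of fusion being speculated about here: the classic pinhole stereo relation Z = f·B/d converts disparity into depth, and a coarse LiDAR reading could anchor it where stereo is weak. Every number, shape, and weight below is an illustrative assumption, not Apple's pipeline:

```python
# Toy sketch: blend stereo depth (from disparity) with a coarse LiDAR
# prior. All parameters are made-up illustrative values.
import numpy as np

f_px = 1500.0       # focal length in pixels (assumed)
baseline_m = 0.019  # lens separation (assumed)

def depth_from_disparity(disparity_px):
    # Pinhole stereo relation: Z = f * B / d
    return np.where(disparity_px > 0, f_px * baseline_m / disparity_px, np.inf)

# Fake disparity map: 10 px everywhere -> Z = 1500 * 0.019 / 10 = 2.85 m
disp = np.full((4, 4), 10.0)
stereo_z = depth_from_disparity(disp)

# Coarse LiDAR says the scene is at 3.0 m; blend the two estimates,
# trusting LiDAR more where disparity is small (far away, noisy stereo).
lidar_z = np.full((4, 4), 3.0)
w_stereo = np.clip(disp / 20.0, 0.0, 1.0)  # ad-hoc confidence weight
fused_z = w_stereo * stereo_z + (1 - w_stereo) * lidar_z
print(fused_z[0, 0])  # 0.5 * 2.85 + 0.5 * 3.0 = 2.925
```

With a metric LiDAR anchor like this, the tiny baseline matters less for absolute depth accuracy; the stereo pair mainly needs to supply relative detail.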

jauntywundrkind|2 years ago

My understanding is that very few consumer lidar sensors work well in daylight. It's hard to send out and detect meaningful pulses of light when there's sunlight all around.

I have an Intel L515, which is pretty remarkable in that you can sometimes get some depth sensing outdoors. This is just a hobby item for me and I'm not an expert, but it launched as an impressively long-range, capable $350 USB3 system, and the market doesn't seem to have much comparable to it. I'd certainly expect phones to be significantly worse.

coldtea|2 years ago

>The bigger problem is the inpainting needed to generate hidden detail when the movie is viewed from angles that are different from the one it was actually filmed from.

It's for spatial video, not for holographic video. When you see a 3D movie in a cinema, it's not like you can look at it from widely different angles and go peek from the side or behind the actors or whatever...

TaylorAlexander|2 years ago

I was wondering about the use of the lidar sensor. Notably, they don't say they're using it; maybe they just wanted to keep it simple? Idk, it seems weird not to use lidar, but also weird not to mention it if they are.

aYsY4dDQ2NrcNzA|2 years ago

3D movies have existed for decades, without adjustable viewing angles.

klausa|2 years ago

One of those lenses is an Ultra-Wide though, with a _very_ different FoV than the other one.

brookst|2 years ago

It uses a crop from the center. Not sure if that crop has the same FoV as the other lens, though. I’d expect so?

golergka|2 years ago

I would expect some ML magic in the image processing pipeline that would make it pop out.

abracadaniel|2 years ago

Early reviews indicate it is, as some reviewers have had access to spatial video taken from a phone, but I’m not sure if those were ideal conditions or just ad-hoc.

varjag|2 years ago

As the two cameras have very different focal lengths, you get a pronounced parallax effect that can be exploited in post.

ricardobeat|2 years ago

Two focal lengths at the same physical distance to the subject have exactly the same perspective (i.e. if you crop them to the same area they will look the same). There is no extra information to be had from that.

0xDEF|2 years ago

One of the two cameras is the ultra-wide camera, so it captures some additional visual information beyond what the separation from the other camera provides.

ryandamm|2 years ago

That’s not how parallax works. The wider field of view of the ultra-wide camera will show some of the scene that the other camera doesn’t see, but over the overlapping parts of the scene, the parallax is a strict function of the locations of the two lenses’ entrance pupils.