
wesleyy | 4 years ago

No, that's what the lightfield does. You see different physical images depending on your angle to the screen.


defaultname | 4 years ago

Fascinating. So not only is it feeding it an 8K / 30 (60?) FPS image, it's feeding it numerous incident angle variations and displaying all of them simultaneously?

Sounds like a monster data rate.
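It really would be, uncompressed. A back-of-envelope calculation (the view count and bit depth here are guesses for illustration, not figures from the talk):

```python
# Rough data-rate estimate for an uncompressed lightfield display.
# The view count (45) and 24-bit color are assumptions, not numbers
# from Google's Project Starline.
def raw_lightfield_gbps(width, height, fps, views, bits_per_pixel=24):
    bits_per_second = width * height * bits_per_pixel * fps * views
    return bits_per_second / 1e9  # gigabits per second

# 8K-class panel, 30 fps, 45 distinct viewing angles:
rate = raw_lightfield_gbps(7680, 4320, 30, 45)
print(f"{rate:.0f} Gbit/s uncompressed")  # -> 1075 Gbit/s uncompressed
```

Over a terabit per second raw, which is why it can't just be video streams per angle.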

dialogbox | 4 years ago

I think that is where the custom compression algorithm comes in. Given that the human body and face don't change much from frame to frame, and that it's based on a 3D model, the compression ratio could be very high.
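A toy sketch of how that temporal coherence could pay off, using a hypothetical keyframe-plus-delta scheme (none of this comes from Google's actual codec):

```python
import numpy as np

# Toy delta compression: send full vertex positions once (keyframe),
# then only quantized deltas for vertices that moved. A mostly-static
# face/body means most deltas are zero and drop out of the update.
# Purely illustrative; Starline's real codec is not public.
def encode_delta(prev_verts, cur_verts, threshold=1e-3):
    delta = cur_verts - prev_verts
    moved = np.abs(delta).max(axis=1) > threshold  # which vertices changed
    return moved.nonzero()[0], delta[moved]        # sparse update

def apply_delta(prev_verts, indices, deltas):
    out = prev_verts.copy()
    out[indices] += deltas
    return out

prev = np.zeros((10_000, 3), dtype=np.float32)
cur = prev.copy()
cur[:50] += 0.01                    # only 50 of 10,000 vertices moved
idx, d = encode_delta(prev, cur)
print(len(idx), "of", len(prev), "vertices sent")  # 50 of 10000
```

With a mostly-still subject you'd only ship a tiny fraction of the model each frame.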

zaptrem | 4 years ago

I only know what I saw from the IO stream, but I think it might send a compressed 3D mesh + texture across the network and render the light field locally.

dkarras | 4 years ago

I think what they are transferring is not video but a 3D model with the skin texture applied to it (all derived from the realtime video / depth recording on the other side). The receiving end then renders it as a 3D model on the screen.
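If that's right, the per-frame payload is a mesh plus a texture rather than many rendered views. A rough size comparison (the vertex and texture counts here are made up):

```python
# Rough per-frame payload comparison: mesh + texture vs. pre-rendered
# lightfield views. All counts are illustrative guesses, not figures
# from Project Starline.
def mesh_payload_mb(verts, tex_w, tex_h):
    mesh = verts * 3 * 4            # xyz coordinates as float32
    texture = tex_w * tex_h * 3     # RGB texture, before compression
    return (mesh + texture) / 1e6

def views_payload_mb(width, height, views):
    return width * height * 3 * views / 1e6

print(f"mesh+texture: {mesh_payload_mb(50_000, 2048, 2048):.1f} MB")
print(f"45 raw views: {views_payload_mb(7680, 4320, 45):.1f} MB")
```

Even before any real compression, the mesh+texture representation is orders of magnitude smaller per frame than shipping the views themselves.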

eps | 4 years ago

Sounds like eye tracking could still be useful, to avoid bothering with images for angles that are definitely not visible at the moment.
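A minimal sketch of that idea: cull angular views outside a cone around the tracked eye position (the angles and cone width here are arbitrary, not anything the device actually uses):

```python
# Toy view culling: render only the lightfield views whose emission
# angle is near the tracked viewer's angle. Parameters are
# illustrative, not Starline's actual design.
def visible_views(view_angles_deg, eye_angle_deg, half_cone_deg=10.0):
    return [a for a in view_angles_deg
            if abs(a - eye_angle_deg) <= half_cone_deg]

views = range(-40, 41, 2)            # 41 views, 2 degrees apart
active = visible_views(views, eye_angle_deg=5.0)
print(len(active), "of", len(views), "views rendered")  # 10 of 41
```

Rendering ~a quarter of the views would cut the local rendering cost proportionally, at the price of depending on tracking latency.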