Recording point clouds over time, I guess I mean. I'm not going to pretend to understand video compression, but could the movement aspect be handled in 3D the same way it is in 2D?
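For what it's worth, the 2D analogy does carry over conceptually: video codecs store occasional full frames (I-frames) plus deltas against the previous frame (P-frames). Here is a toy sketch of the same idea for a point cloud sequence; it assumes per-point correspondence between frames (point i in frame N is the same physical point as in frame N+1), which real captures don't give you for free, and that missing correspondence is a big part of why this is hard. All names here are made up for illustration, not from any real codec.

```python
# Toy sketch of temporal delta coding for point cloud sequences,
# analogous to I-frames/P-frames in 2D video compression.
# Assumes per-point correspondence across frames (a big assumption).

def encode_sequence(frames, keyframe_interval=8):
    """Encode a list of frames (each a list of (x, y, z) tuples) as
    full keyframes ("I") plus per-point deltas ("P") vs. the previous frame."""
    encoded = []
    prev = None
    for i, frame in enumerate(frames):
        if prev is None or i % keyframe_interval == 0:
            encoded.append(("I", [tuple(p) for p in frame]))  # full keyframe
        else:
            # Store only the motion: coordinate differences per point.
            deltas = [tuple(c - q for c, q in zip(p, pp))
                      for p, pp in zip(frame, prev)]
            encoded.append(("P", deltas))
        prev = frame
    return encoded

def decode_sequence(encoded):
    """Reconstruct the original frames from keyframes and deltas."""
    frames = []
    prev = None
    for kind, data in encoded:
        if kind == "I":
            frame = [tuple(p) for p in data]
        else:
            # Apply deltas to the previously decoded frame.
            frame = [tuple(q + d for q, d in zip(pp, dp))
                     for pp, dp in zip(prev, data)]
        frames.append(frame)
        prev = frame
    return frames
```

Real codecs like MPEG's V-PCC sidestep the correspondence problem by projecting the points onto 2D patches and handing those to an ordinary video encoder, rather than differencing points directly like this sketch does.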
secretsatan|1 month ago
darhodester|1 month ago
Check this project, for example: https://zju3dv.github.io/freetimegs/
Unfortunately, these formats are currently locked behind cloud processing, so adoption is rather low.
Before Gaussian splatting, textured mesh caches would be used for volumetric video (e.g. Alembic geometry).
itishappy|1 month ago
https://developer.apple.com/documentation/spatial/
Edit: As I'm digging, this seems to be focused on stereoscopic video as opposed to actual point clouds. It appears applications like cinematic mode use a monocular depth map, and their lidar outputs raw point cloud data.
secretsatan|1 month ago