top | item 41244411


jorgemf | 1 year ago

Gaussian splatting transforms images into a point cloud. GPUs can render these points, but it is a very slow process. You need to convert the point cloud into meshes. So basically it is the initial step to capture environments before converting them to 3D meshes that GPUs can use for anything you want. It is much cheaper to use pictures to get a 3D representation of an object or environment than to buy professional equipment.


andybak|1 year ago

> Gaussian splatting transform images to a cloud points.

Not exactly. The "splats" are both spread out in space (big ellipsoids), partially transparent (what you end up seeing is the composite of all the splats you can see in a given direction) AND view dependent (they render differently depending on the direction you are looking from).
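To make the "view dependent" part concrete: in the reference 3D Gaussian Splatting renderer, each splat stores spherical-harmonic (SH) coefficients per color channel, and its color is evaluated from the camera-to-splat direction. A minimal degree-0/1 sketch (the coefficient values and layout follow the common real-SH convention; the example data is made up):

```python
import numpy as np

# Real spherical harmonics constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_to_color(sh_coeffs, view_dir):
    """Evaluate a splat's view-dependent color.

    sh_coeffs: (4, 3) array - one RGB coefficient per SH basis
               function (1 for degree 0, 3 for degree 1).
    view_dir:  unit vector from the camera toward the splat.
    """
    x, y, z = view_dir
    color = SH_C0 * sh_coeffs[0]       # view-independent base color
    color -= SH_C1 * y * sh_coeffs[1]  # the degree-1 terms add a
    color += SH_C1 * z * sh_coeffs[2]  # directional tint that changes
    color -= SH_C1 * x * sh_coeffs[3]  # as the camera moves
    return np.clip(color + 0.5, 0.0, 1.0)

# The same splat, seen from two directions, yields different colors:
sh = np.random.default_rng(0).normal(size=(4, 3)) * 0.2
print(sh_to_color(sh, np.array([0.0, 0.0, 1.0])))
print(sh_to_color(sh, np.array([1.0, 0.0, 0.0])))
```

So the view dependence is baked into each splat's parameters, not just an artifact of the changing viewpoint.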

Also - there's no simple spatial relationship between splats and solid objects. The resulting surfaces are a kind of optical illusion built from all the splats you're seeing in a specific direction. (Some methods have attempted to lock splats more closely to the surfaces they are meant to represent, but I don't know what the tradeoffs are.)

Generating a mesh from splats is possible but then you've thrown away everything that makes a splat special. You're back to shitty photogrammetry. All the clever stuff (which is a kind of radiance capture) is gone.

Splats are a lot faster to render than NeRFs - which is their appeal. But they're heavier than triangles due to having to sort them every frame (because transparent objects don't composite correctly without depth sorting).
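The sorting requirement comes from the "over" compositing operator not being commutative. A toy sketch of what happens along one ray (the splat data here is invented for illustration):

```python
import numpy as np

def composite(depths, colors, alphas, sort=True):
    """Alpha-composite splats along a single ray with the 'over' operator."""
    order = np.argsort(depths)[::-1] if sort else np.arange(len(depths))
    out = np.zeros(3)
    for i in order:  # farthest splat first when sorted
        out = colors[i] * alphas[i] + out * (1.0 - alphas[i])
    return out

# Three half-transparent splats at different depths along one ray.
depths = np.array([2.0, 5.0, 1.0])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
alphas = np.array([0.5, 0.5, 0.5])

print(composite(depths, colors, alphas, sort=True))   # -> [0.25  0.125 0.5  ]
print(composite(depths, colors, alphas, sort=False))  # -> [0.125 0.25  0.5  ]
```

Skipping the sort blends the same three splats into a visibly different color, which is why the per-frame sort can't be avoided for correct transparency.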

vessenes|1 year ago

Minor nit — in what way do splats render differently depending on direction of looking? To my mind these are probabilistic ellipsoids in 3D (or 4D for motion splats) space, and so while any novel view will see a slightly different shape, that’s an artifact of the view changing, not the splat. Do I understand it (or you) correctly?