slashdave | 1 month ago

Pixel by pixel, time slice by time slice, in a 2D+T convolution. You provide enough examples of videos with a changing point of view, and the model reproduces what it is given.
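
A minimal sketch of the "2D+T" (spatiotemporal) convolution the comment describes: a kernel that slides over height and width per frame, plus one that slides across the time axis. The layer sizes here are illustrative, not taken from any particular video model.

    import torch
    import torch.nn as nn

    class SpatioTemporalConv(nn.Module):
        """Factorized 2D+T convolution: spatial conv per frame, then temporal conv."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # Kernel shapes are (time, height, width).
            self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
            self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))

        def forward(self, x):  # x: (batch, channels, time, height, width)
            return self.temporal(torch.relu(self.spatial(x)))

    video = torch.randn(1, 3, 16, 64, 64)   # 16 RGB frames of 64x64 pixels
    out = SpatioTemporalConv(3, 8)(video)
    print(out.shape)                         # torch.Size([1, 8, 16, 64, 64])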

in-silico | 1 month ago

Yes, it reproduces what it is given by modelling the rules of physics, geometry, etc.

For example, image generators like Stable Diffusion carry strong internal representations of depth and geometry, such that performant depth-estimation models can be built on top of them with minimal retraining. The same continues to be true for video generation models.

Early work on the subject: https://arxiv.org/pdf/2409.09144
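
A hedged sketch of the idea in the comment above, not the linked paper's actual method: freeze a pretrained generator and train only a small head to predict depth from its intermediate features. The `backbone` object and its feature shape are placeholders.

    import torch
    import torch.nn as nn

    class DepthProbe(nn.Module):
        """Tiny trainable depth head on top of a frozen pretrained backbone."""
        def __init__(self, backbone, feat_ch):
            super().__init__()
            self.backbone = backbone.eval()        # pretrained generator encoder, frozen
            for p in self.backbone.parameters():
                p.requires_grad = False
            self.head = nn.Conv2d(feat_ch, 1, 1)   # the only trainable part

        def forward(self, frames):                 # frames: (B, C, H, W)
            with torch.no_grad():
                feats = self.backbone(frames)      # (B, feat_ch, h, w) intermediate features
            return self.head(feats)                # per-pixel depth prediction

If depth really is linearly decodable from the frozen features, a probe like this trains quickly; that is the kind of evidence usually cited for the claim that generators learn geometry rather than just copying pixels.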

slashdave | 25 days ago

What? No, it does no such thing. Study the architecture. Pixels in. Pixels out.