This approach is interesting in that it applies image-to-image diffusion modeling to autoregressively generate 3D-consistent novel views, starting from even a single reference 2D image. Unlike some other approaches, a NeRF is not needed as an intermediate representation. (A rough sketch of that autoregressive sampling loop is below.)
> In order to maximize the reproducibility of our results, we provide code in JAX (Bradbury et al., 2018) for our proposed X-UNet neural architecture from Section 2.3.
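To make the "autoregressive" part concrete: this isn't the authors' code (their JAX release covers the X-UNet architecture itself), just a minimal sketch of what such a conditioning loop can look like. The dummy denoiser, the step count, and the random choice of which earlier frame to condition on are placeholder assumptions, not the paper's exact scheme.

    import jax
    import jax.numpy as jnp

    def dummy_denoiser(params, rng, noisy_target, t, cond_image, cond_pose, target_pose):
        # Placeholder for a learned image-to-image UNet: just pulls the noisy
        # target toward the conditioning view so the loop runs end to end.
        del params, rng, t, cond_pose, target_pose
        return 0.9 * noisy_target + 0.1 * cond_image

    def sample_view(rng, denoiser, params, cond_image, cond_pose, target_pose, steps=64):
        # Reverse diffusion: start from pure noise and repeatedly denoise the
        # target view, conditioned on one known view and the two camera poses.
        rng, init_rng = jax.random.split(rng)
        x = jax.random.normal(init_rng, cond_image.shape)
        for t in reversed(range(steps)):
            rng, step_rng = jax.random.split(rng)
            x = denoiser(params, step_rng, x, t / steps, cond_image, cond_pose, target_pose)
        return x

    def generate_trajectory(rng, denoiser, params, ref_image, ref_pose, target_poses):
        # Autoregressive outer loop: every generated view joins the pool of
        # frames that later views may be conditioned on.
        views, poses = [ref_image], [ref_pose]
        for target_pose in target_poses:
            rng, pick_rng, sample_rng = jax.random.split(rng, 3)
            idx = int(jax.random.randint(pick_rng, (), 0, len(views)))
            views.append(sample_view(sample_rng, denoiser, params,
                                     views[idx], poses[idx], target_pose))
            poses.append(target_pose)
        return views

Called with jax.random.PRNGKey(0), a reference image array, and a list of target poses, each new frame is generated conditioned on an already accepted frame, which is where the pressure toward 3D consistency comes from.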
This is one of the building blocks absolutely required for Full Self-Driving to ever work.
btw I like how it hallucinated a bumper-carrier-mounted spare wheel based on the size of the tires, the heavy-duty roof rack, and the bull bars, while the ground-truth render showed the much less likely configuration of a stock undercarriage frame hanger and no spare.
No, NeRFs are more interpretable because they directly model a density field that absorbs and emits light (a bare-bones volume-rendering sketch follows after this comment). In this respect they are something akin to a neural version of photogrammetry. They don't need to train on a large corpus of images, because they can reconstruct directly from a collection of posed images.
On the other hand, diffusion models can learn fairly arbitrary distributions of signals, so by exploiting this learned prior together with view consistency, they can be much more sample efficient than ordinary NeRFs. Without learning such a prior, 3D reconstruction from a single image is extremely ill-posed (much like monocular depth estimation).
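To unpack "densities which absorb and emit light": below is a bare-bones volume rendering routine, my own simplification rather than any particular NeRF codebase. The field can be anything that maps a 3D point to a density and a color; in a real NeRF it is an MLP trained so that renders of it match the posed photos.

    import jax.numpy as jnp

    def render_ray(field_fn, origin, direction, near=2.0, far=6.0, num_samples=64):
        # Sample points along the ray between the near and far planes.
        t = jnp.linspace(near, far, num_samples)
        points = origin + t[:, None] * direction               # (N, 3)

        # field_fn maps 3D points to (density, rgb); an MLP in a real NeRF.
        density, rgb = field_fn(points)                         # (N,), (N, 3)

        # Absorption: turn densities into per-segment opacities.
        delta = jnp.concatenate([t[1:] - t[:-1], jnp.array([1e10])])
        alpha = 1.0 - jnp.exp(-density * delta)

        # Transmittance: fraction of light that survives to reach each segment.
        trans = jnp.roll(jnp.cumprod(1.0 - alpha + 1e-10), 1).at[0].set(1.0)

        # Emission: composite colors front to back, weighted by visible opacity.
        weights = alpha * trans
        return jnp.sum(weights[:, None] * rgb, axis=0)          # final pixel color

    def toy_sphere(points):
        # Solid sphere of radius 1 at the origin with a uniform orange color.
        density = 10.0 * (jnp.linalg.norm(points, axis=-1) < 1.0)
        rgb = jnp.broadcast_to(jnp.array([1.0, 0.5, 0.1]), points.shape)
        return density, rgb

    pixel = render_ray(toy_sphere, jnp.array([0.0, 0.0, -4.0]), jnp.array([0.0, 0.0, 1.0]))

Training a NeRF is then just gradient descent on the difference between rendered rays and the actual pixels of the posed photos, which is why no large image corpus is required.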
I'm entirely unfamiliar with this, but is there a future where we can take a few pictures of something physical, and have AI generate a 3D model that we can then modify and 3D print?
Asking as someone who's dreadfully slow at 3d modeling.
You're probably looking for multiview photogrammetry, also known as "structure from motion." https://github.com/alicevision/Meshroom is a good free tool for this, but the most popular one is probably Agisoft Metashape.
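Roughly what those tools do under the hood: detect matching features across the photos, solve for the camera poses, triangulate every matched point into 3D, and mesh the result. The triangulation step by itself is tiny; here is a generic DLT (direct linear transform) sketch, not anything specific to Meshroom or Metashape.

    import jax.numpy as jnp

    def triangulate(P1, P2, x1, x2):
        # P1, P2: 3x4 camera projection matrices for two photos.
        # x1, x2: pixel coordinates (u, v) of the same physical point in each photo.
        A = jnp.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The homogeneous 3D point is the singular vector of A with the
        # smallest singular value (the least-squares null vector).
        _, _, vt = jnp.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

A full photogrammetry pipeline runs this (plus outlier rejection and bundle adjustment) over tens of thousands of matched features, then builds a dense mesh you can clean up and print.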
nl|3 years ago
In DreamFusion [1] they do use a NeRF representation.
[1] https://dreamfusion3d.github.io/gallery.html
oifjsidjf|3 years ago
Nice.
OpenAI shitting their pants even more.
astrange|3 years ago
What they don't do is release the actual models and datasets, and it's very expensive to retrain those.
dougabug|3 years ago
https://blogs.nvidia.com/blog/2022/09/23/3d-generative-ai-re...
https://research.nvidia.com/publication/2021-11_extracting-t...