That wasp is one of the single most impressive pieces of computer graphics I have ever seen, and, seemingly in contradiction, also a fantastic piece of macro photography. The fact that it renders in real time is amazing.
There was a discussion on here the other day about the PS6, and honestly, were I still involved in console/games production, I'd be looking seriously at how to incorporate assets like this.
Gaussian splats don't offer the flexibility required for a typical videogame. Since they aren't true PBR, the lighting is effectively hardcoded. Rigging doesn't work well with them, and editing would be very hard.
It's good for visualizing something by itself, but not for building a scene out of it.
The page saturation made me think something in the foreground was highlighted that I simply couldn't see, leaving the whole page shaded as if "in the background."
How does it capture the reflection (the iridescence of the fly's body)? It's almost as if I can see the background through the reflection.
I would have thought that since that reflection has a different color in different directions, gaussian splat generation would have a hard time coming to a solution that satisfies all of the rays. Or at the very least, that a reflective surface would turn out muddy rather than properly reflective-looking.
Is there some clever trickery that's happening here, or am I misunderstanding something about gaussian splats?
The color is view-dependent, which also means the lighting is baked in; as a result, splats aren't directly usable for 3D animation/environments (though I'm sure there must be research happening on dynamic lighting).
Sometimes it will "go wrong": you can see in some of the fly models that if you get too close, body parts start looking a bit transparent, as some of the specular highlights are actually splats on the back of an internal surface. This is very evident with mirrors: they are just an inverted projection, which you can walk right into.
Gaussian splats can have colour components that depend on the viewing direction. As far as I know, these are implemented as spherical harmonics. The angular resolution is determined by the number of spherical-harmonic components: if it is too low, all view-dependent colour changes will be slow and smooth, and any reflection will be blurred.
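As a rough sketch of that idea (the coefficient values below are made up for illustration; real splats typically carry more SH bands per colour channel), evaluating a degree-1 spherical-harmonic colour for a single splat might look like:

```python
import math

# Real SH basis constants for degrees 0 and 1
SH_C0 = 0.28209479177387814   # Y_0^0
SH_C1 = 0.4886025119029199    # Y_1^{-1}, Y_1^0, Y_1^1 (up to sign)

def sh_color(coeffs, view_dir):
    """Evaluate a degree-1 SH colour (4 RGB coefficients) for one view direction.

    coeffs: list of 4 (r, g, b) tuples; view_dir: unit vector (x, y, z).
    More SH bands would give sharper angular detail; with only a few,
    reflections come out slow, smooth, and blurred, as described above.
    """
    x, y, z = view_dir
    basis = [SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x]
    return tuple(sum(b * c[ch] for b, c in zip(basis, coeffs))
                 for ch in range(3))

# Hypothetical coefficients: neutral base colour plus a view-dependent tint
coeffs = [(1.0, 1.0, 1.0), (0.0, 0.0, 0.2), (0.1, 0.0, 0.0), (0.2, 0.1, 0.0)]
front = sh_color(coeffs, (0.0, 0.0, 1.0))  # looking along +z
side  = sh_color(coeffs, (1.0, 0.0, 0.0))  # looking along +x
```

The same splat returns a different colour from the front and from the side, which is how baked-in specular highlights end up view-dependent.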
I wonder if there's research into fitting gaussian splats that are dependent on focus distance? Basically as a way of modeling bokeh - you'd feed the raw, unstacked shots and get a sharp-everywhere model back.
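For context on what such a model would have to fit, the defocus blur in each raw shot is well described by the thin-lens circle of confusion. A minimal sketch (all numbers hypothetical):

```python
def circle_of_confusion(subject_dist, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter on the sensor (input units).

    Blur is zero exactly at the focus distance and grows away from it;
    a focus-distance-dependent splat fit would have to explain this
    per-image blur instead of treating it as sharp detail.
    """
    aperture = focal_len / f_number  # entrance-pupil diameter
    return (aperture * focal_len * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_len)))

# Hypothetical macro setup: 100 mm lens at f/8, focused at 300 mm
in_focus = circle_of_confusion(300.0, 300.0, 100.0, 8.0)
near     = circle_of_confusion(295.0, 300.0, 100.0, 8.0)
```

Even a 5 mm miss in subject distance produces a non-trivial blur circle at macro distances, which is why unstacked shots are mostly unsharp.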
It’d be amazing to see a collab with the Exquisite Creatures Revealed artist. He preserves all kinds of insects and presents them in a way that highlights the color and iridescent effects nature offers. I was so blown away by the exhibit I went back. Artist: https://christophermarley.com/
The file sizes are impressive (as in small). I don't have the link right now but there are recent 4D splats that include motion (like videos but you can move around the scene) and they're in the megabytes.
Very cool; unfortunately I find the 3D completely unusable on mobile. The moment I touch it in orbit mode it locks to a southern-pole view and whips about like crazy however I try to rotate it.
It is remarkable that this is accomplished with a relatively modest setup and effort, and the results are already great. Makes me wonder what you could get with high-end gear (e.g. a 61 MP Sony a7R V and the new 100mm 1.4x macro) and capturing more frames. I also imagine that the web versions lose some detail to reduce size.
I presume these would look great on a good VR headset?
I wonder if one could capture each angle in a single shot with a Lytro Illum instead of focus-stacking? Or is the output of an Illum not of sufficient resolution?
> Unfortunately, the extremely shallow depth of field in macro photography completely throws this process off. If you feed unsharp photos into it, the resulting model will contain unsharp areas as well.
It should be possible to model the focal depth of the camera directly, but perhaps that is not done in standard software. You would still want several images with different focus settings.
A pinhole lens plus lots of light / long exposures to get sharp focus might help avoid some of the extra processing steps. He does mention he shot at a small aperture, which can cause diffraction effects, and I guess that might be worse with a pinhole, though.
It all depends on everything else. More light means longer recycle times on the speedlights, or higher ISO and more noise. Longer exposure isn't an option with speedlights, and using continuous light has its own downsides: things may start to shake.
The bumblebee was my first attempt; the tracking didn't quite work, so you get ghosting. Others have ghosting too, which usually happens when part of the insect moves while shooting (which takes 4 h). They dry and crumble after a while.
I'm not an expert and have not yet worked with splats. However, I understand that unlike super-sharp-edged triangles, they can represent complicated, semi-transparent "soft" phenomena like fur or clouds that would ordinarily need to be rendered using semi-transparent curves/sheathes (for fur/grass) or voxels for cloudy things like steam/mist. I gather splats can also represent and reproduce a limited amount of view-dependent specularity; as other commenters have said, this is not dynamic and cannot easily deal with changing scene geometry or light sources. It still sounds like a fun research project to make it do more in terms of illumination, though!
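That "soft" semi-transparent look comes from alpha-compositing many overlapping low-opacity splats along each view ray. A minimal front-to-back sketch (the colours and opacities are made up):

```python
def composite_front_to_back(splats):
    """Blend depth-sorted (color, alpha) pairs, nearest first.

    Each splat contributes color * alpha * (remaining transmittance),
    which is how soft, partly see-through structures like fur or haze
    build up from many low-alpha Gaussians instead of hard surfaces.
    """
    color, transmittance = 0.0, 1.0
    for c, a in splats:
        color += c * a * transmittance
        transmittance *= 1.0 - a
    return color, 1.0 - transmittance  # final colour, accumulated opacity

# Three hypothetical grey splats along one ray, nearest first
color, alpha = composite_front_to_back([(0.9, 0.4), (0.5, 0.5), (0.2, 0.8)])
```

Because no single splat is fully opaque, edges fade out gradually, which is exactly what hard-edged triangles struggle to reproduce for fur or mist.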
It's just a simpler primitive I assume. Blurs and colors and angles are simpler than 3D geometries, so it's probably more aligned with working/thinking with other very low-level primitives with minimal dimensions (like the math of neural networks). I dunno, I'm kinda vibing a response here -- maybe someone else can give you a more authoritative answer
I'd also like to show my gratitude for you releasing this as a free culture file! (CC BY)
https://dof-gs.github.io/
https://dof-gaussian.github.io/
Black text on a dark grey background is nearly unreadable - I used Reader Mode.
https://superspl.at/view?id=ac0acb0e
I believe this one is misnamed
https://superspl.at/view?id=1eacd61c wasp!
https://superspl.at/view?id=23a16d0e fly!
I'd love to know the compute hardware he used and the time it took to produce.
Most likely, triangles are used to render the splats in a traditional pipeline.
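In that scheme each splat is rasterised as a camera-facing quad (two triangles) big enough to cover its footprint, and the fragment shader evaluates the 2D Gaussian falloff per pixel. A sketch of that per-pixel weight (the covariance values are hypothetical):

```python
import math

def gaussian_weight(px, py, cx, cy, cov):
    """Per-pixel opacity falloff of a 2D screen-space Gaussian splat.

    cov is the 2x2 covariance ((a, b), (b, c)); the quad rasterised for
    the splat only needs to cover the region where this weight is
    non-negligible (a few standard deviations).
    """
    a, b = cov[0]
    _, c = cov[1]
    det = a * c - b * b
    dx, dy = px - cx, py - cy
    # Squared Mahalanobis distance via the inverse covariance
    d2 = (c * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det
    return math.exp(-0.5 * d2)

cov = ((4.0, 1.0), (1.0, 2.0))  # hypothetical anisotropic splat
center = gaussian_weight(10.0, 10.0, 10.0, 10.0, cov)  # 1.0 at the centre
edge   = gaussian_weight(14.0, 10.0, 10.0, 10.0, cov)  # fades toward the rim
```

So the traditional triangle pipeline only provides coverage; the soft Gaussian shape comes entirely from this per-fragment weight.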