MathYouF|3 years ago
Blender work still involves a lot of tedium, mostly related to topology. A lot of upcoming 3D ML applications also work considerably better with SDF representations than with meshes. I wouldn't be surprised to see this form of 3D modeling take off to a significant degree because of those two factors.
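To make the topology point concrete, here's a minimal sketch (not from the comment, just an illustration) of why SDF modeling sidesteps topology: shapes are plain functions, and blending them is function composition, so there is no vertex/edge/face bookkeeping at all. The `smooth_union` blend is the standard polynomial smooth-min trick.

```python
import math

def sdf_sphere(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def smooth_union(d1, d2, k=0.5):
    """Polynomial smooth-min: merges two shapes like blobs of clay, no topology involved."""
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

def scene(p):
    # "Modeling" is just composing distance functions.
    a = sdf_sphere(p, (0.0, 0.0, 0.0), 1.0)
    b = sdf_sphere(p, (1.2, 0.0, 0.0), 1.0)
    return smooth_union(a, b)

print(scene((0.0, 0.0, 0.0)))  # inside the first sphere -> negative
print(scene((5.0, 0.0, 0.0)))  # far outside -> positive
```

This implicit form is also what makes SDFs friendly to ML pipelines: the shape is a continuous, differentiable-almost-everywhere function that a network can learn to approximate directly.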
preommr|3 years ago
There were the original metaballs, but more recently there have also been SDF add-ons built on geometry nodes [1] that mimic the same workflow - my guess being that they use voxels to generate the final polygon mesh that Blender needs, since it's not a fully SDF-based editor. While googling this, I also found someone who managed to do it with pure shaders [2], which is pretty cool.
Also, thanks for actually explaining that. I've seen a few examples of this kind of "clay-like" sculpting approach that tries to make things easier for artists. Adobe's Modeler uses SDFs, for example.
[1] https://blenderartists.org/t/geometry-nodes-in-3-3-sdf-prese...
[2] https://www.youtube.com/watch?v=sqDCPW85tuQ
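The voxel guess above is how these add-ons typically work in practice: sample the SDF on a regular grid, then run a mesher (marching cubes in a real pipeline) over the grid. A rough, self-contained sketch of just the sampling step, with marching cubes reduced to counting the cells whose corner samples straddle zero:

```python
import math

def sdf_sphere(p, radius=1.0):
    return math.sqrt(sum(c * c for c in p)) - radius

def sample_grid(sdf, n=16, extent=1.5):
    """Evaluate the SDF at every point of an n^3 grid spanning [-extent, extent]^3."""
    step = 2 * extent / (n - 1)
    return {
        (i, j, k): sdf((-extent + i * step, -extent + j * step, -extent + k * step))
        for i in range(n) for j in range(n) for k in range(n)
    }

def surface_cells(grid, n=16):
    """Cells whose 8 corner samples change sign contain surface; a mesher
    like marching cubes would emit triangles for exactly these cells."""
    cells = []
    for i in range(n - 1):
        for j in range(n - 1):
            for k in range(n - 1):
                corners = [grid[(i + di, j + dj, k + dk)]
                           for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
                if min(corners) < 0.0 <= max(corners):
                    cells.append((i, j, k))
    return cells

grid = sample_grid(sdf_sphere)
print(len(surface_cells(grid)))  # number of cells a meshing pass would triangulate
```

The grid resolution `n` is the usual quality/cost knob: the output mesh can only resolve features larger than one voxel, which is why these add-ons never feel quite as sharp as a true SDF renderer.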
LarsDu88|3 years ago
Interestingly, most folks think of 3D modeling as quad modeling/subdivision surfaces, but Toy Story 1 was done entirely with NURBS (which Blender also supports).
andybak|3 years ago
Ideally, the end result of an SDF pipeline is pixels; converting back to polygons throws away much of the advantage of SDFs. Raymarching is costly and rarely used in realtime engines, but Blender isn't realtime, so rendering SDFs directly would probably be viable.
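For readers unfamiliar with the term: raymarching an SDF usually means sphere tracing, where each ray advances by the distance the SDF reports, which is by definition a safe step. A minimal sketch (illustrative only, assuming a single unit-sphere SDF):

```python
import math

def sdf(p):
    """Unit sphere at the origin."""
    return math.sqrt(sum(c * c for c in p)) - 1.0

def raymarch(origin, direction, max_steps=64, eps=1e-4, max_dist=20.0):
    """Sphere tracing: return distance along the ray to the surface, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # close enough to the surface: a hit
        t += d                # the SDF value is always a safe step size
        if t > max_dist:
            break
    return None

hit = raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))   # ray aimed at the sphere
miss = raymarch((0.0, 3.0, -3.0), (0.0, 0.0, 1.0))  # ray that passes above it
print(hit, miss)
```

The cost comment above comes from this loop: every pixel runs up to `max_steps` SDF evaluations, and steps shrink near surfaces, which is what makes raymarching expensive for realtime engines but acceptable for an offline renderer like Cycles.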