Certainly a step above what a normal person could do in a 3D modeling program. The interesting thing about this type of output is that it would be harder for a professional to use these meshes as base models; the topology doesn't lend itself to their process. For example, the banana peels are puffy and essentially two-dimensional; the mesh would have to be completely restructured to read as a convincingly peeled banana. So either the generated model has to be a finished product, or it is cost-effectively useless. Even CAD files, which are clean and mathematically easy to break down, are notoriously bad as base models.
badsectoracula|2 years ago
Not really. Depending on the use case and the available tools, this can be much faster than the alternative of building these meshes and textures manually.
Using the banana as an example: if this output can be converted to a volumetric model (voxels), the puffy side can be shaved off with sculpting tools and the result converted back to a mesh much faster than modeling it from scratch. While the end result wouldn't hold up to close inspection, it can be perfectly viable for background props in a game, especially one viewed from a bird's eye view or a zoomed-out third-person perspective (though even up close it'll look better than what you see in some games[0] - and that is AAA).
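To make the "shave it off in voxel space" idea concrete, here is a minimal sketch in plain NumPy. It is purely illustrative and not from the paper: a solid block with an empty border stands in for the voxelized model, and one morphological erosion pass plays the role of a sculpting tool stripping the thin puffy shell.

```python
import numpy as np

def erode(vox: np.ndarray) -> np.ndarray:
    """Remove every voxel that touches empty space on any of its 6 axis
    neighbours - a crude stand-in for one 'shave' pass of a sculpt brush."""
    core = vox.copy()
    for axis in range(3):
        for shift in (1, -1):
            core &= np.roll(vox, shift, axis=axis)
    return core

# Hypothetical stand-in for the converted model: an 8x8x8 solid inside a
# 10x10x10 grid (the empty border keeps np.roll's wrap-around harmless).
grid = np.zeros((10, 10, 10), dtype=bool)
grid[1:9, 1:9, 1:9] = True

shaved = erode(grid)
print(grid.sum(), shaved.sum())  # 512 voxels before, 216 (the 6x6x6 core) after
```

A real tool would erode selectively (only where the surface is puffy) and then run marching cubes to get a mesh back, but the grid-in, smaller-grid-out shape of the operation is the same.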
In fact, several games have already used photogrammetry to construct 3D models: photos of real places are taken from various angles, converted to point clouds and then to meshes, which afterwards need to be cleaned up by artists. All of this takes time, is costly, and needs specialized hardware and software, yet developers do it anyway. The linked paper describes a method that significantly lowers those barriers while giving decent results, even if they still need to be edited.
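One standard cleanup step in that point-cloud-to-mesh pipeline is voxel-grid downsampling: collapse the raw scan to one representative point per occupied voxel before meshing. A minimal sketch in plain NumPy follows; the function name, voxel size, and random cloud are all illustrative assumptions (real pipelines use tools like Open3D or Meshlab), and coordinates are assumed non-negative so the integer key packing works.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one point (the centroid) per occupied voxel.
    Assumes non-negative coordinates; negative ones would need an offset."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Pack the 3 integer coords into one key (grid assumed < 1024 per axis).
    flat = keys[:, 0] * 1048576 + keys[:, 1] * 1024 + keys[:, 2]
    uniq, inverse = np.unique(flat, return_inverse=True)
    # Accumulate per-voxel sums and counts, then average to get centroids.
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=len(uniq))
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 1, size=(10_000, 3))   # noisy stand-in for scan data
sparse = voxel_downsample(cloud, voxel_size=0.25)
print(len(cloud), "->", len(sparse))  # one centroid per occupied 0.25-voxel
```

The surviving points then go into surface reconstruction (e.g. Poisson), which is where the artist cleanup the comment mentions still comes in.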
[0] https://i.imgur.com/0OeNkZu.jpeg
spookie|2 years ago
Gotta say, tools like these or NeRF will only revolutionise 3D modelling if they ever get topology right. That's the hard part.
andybak|2 years ago
I think it's equally likely that we'll end up replacing meshes entirely - or with a hybrid pipeline where non-mesh representations coexist with traditional meshes.
I'm thinking mainly about realtime, but the same thing might apply to "offline" rendering (does anyone still call it that?)
poulpy123|2 years ago