W0lf | 1 year ago
Regarding the coloring of each 3D point, it might be feasible to use not just one camera image, but a weighted sum of all camera images that can see the same point in the scene. Each pixel color would then be weighted by the scalar product of the point's normal and the viewing direction of the camera. This would also account for noise and specular reflections (which can mess up the original color).
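A minimal sketch of that weighting scheme, assuming you already have, for each camera that sees the point, its RGB sample and a unit vector from the point toward the camera (the `cameras` list here is hypothetical):

```python
import numpy as np

def blend_point_color(normal, cameras):
    """Blend a 3D point's color from all cameras that see it.

    normal  -- unit surface normal at the point
    cameras -- list of (rgb, view_dir) pairs, where rgb is that
               camera's color sample for the point and view_dir is
               the unit vector from the point toward the camera
    Returns the weighted average color, or None if no camera
    faces the point.
    """
    weights, colors = [], []
    for rgb, view_dir in cameras:
        # Scalar product of normal and viewing direction as weight;
        # clamp at zero so back-facing cameras contribute nothing.
        w = max(float(np.dot(normal, view_dir)), 0.0)
        if w > 0.0:
            weights.append(w)
            colors.append(np.asarray(rgb, dtype=float))
    if not weights:
        return None
    weights = np.asarray(weights)
    return (weights[:, None] * np.stack(colors)).sum(axis=0) / weights.sum()
```

Grazing-angle views get small weights, so a noisy or specular sample from an oblique camera is naturally down-weighted relative to a head-on view.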
shikhardevgupta | 1 year ago
The way I handle the different camera images is to simply see which one gives a lower depth and use that one, the idea being that a closer camera provides better information. But what you are suggesting is pretty interesting. I'm going to try that as well.
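For comparison, that nearest-camera heuristic is just an argmin over the per-point depths (assuming the depth and color sample from each camera are already gathered):

```python
import numpy as np

def pick_color_by_depth(depths, colors):
    """Use the color from the closest camera (lowest depth) for a point.

    depths -- per-camera depth of the point (same order as colors)
    colors -- per-camera RGB sample of the point
    """
    nearest = int(np.argmin(depths))
    return colors[nearest]
```

It is cheap and often reasonable, but unlike the weighted blend above it keeps any noise or specular highlight present in that single winning view.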