item 41275944

W0lf | 1 year ago

I worked on this as part of my thesis at university quite a few years back. One other optimization would be to process the points in parallel.

Regarding the coloring of each 3D point, it might be feasible not to use a single camera image, but a weighted sum of all camera images that can see the same point in the scene. Each pixel color is then weighted by the scalar product of the point's normal and the viewing direction of the camera. This would also account for noise and specular reflections (which can mess up the original color).
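A minimal sketch of that weighting scheme, assuming you already have each camera's position and the pixel color it samples for the point (function and variable names here are illustrative, not from anyone's actual code):

```python
import numpy as np

def blend_point_color(normal, cam_positions, cam_colors, point):
    """Blend one point's color from several cameras, weighting each
    camera by the dot product of the surface normal and the viewing
    direction, as described above. Cameras looking at the back side
    get weight zero."""
    normal = normal / np.linalg.norm(normal)
    # Unit vectors from the point toward each camera
    view_dirs = cam_positions - point
    view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
    # cos(angle between normal and view direction), clamped at zero
    weights = np.clip(view_dirs @ normal, 0.0, None)
    if weights.sum() == 0.0:
        return None  # no camera sees the front of this surface
    return (weights[:, None] * cam_colors).sum(axis=0) / weights.sum()
```

A camera looking straight down the normal gets full weight, a grazing camera contributes almost nothing, which is what suppresses the noisy and specular samples.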


shikhardevgupta | 1 year ago

Yes, I am working on using numpy to do the projection with matrices so we don't have to loop over each point and project it individually. That should be a big boost.
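The vectorized projection could look something like this sketch, assuming a standard pinhole model with intrinsics K and extrinsics R, t (the names and the exact camera model are my assumptions, not the poster's code):

```python
import numpy as np

def project_points(points, K, R, t):
    """Project all 3D points with matrix operations instead of a
    per-point loop. points: (N, 3) world coordinates; K: (3, 3)
    intrinsic matrix; R: (3, 3) rotation and t: (3,) translation
    taking world coordinates to camera coordinates."""
    # All points into camera coordinates at once: (N, 3)
    cam = points @ R.T + t
    # Apply intrinsics, then the perspective divide
    proj = cam @ K.T
    pixels = proj[:, :2] / proj[:, 2:3]
    depths = cam[:, 2]  # per-point depth along the camera's z axis
    return pixels, depths
```

With N points this is two matrix multiplies and one divide, so NumPy's vectorization does the work that the Python-level loop was doing point by point.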

The way I handle the different camera images is to simply see which one provides a lower depth and use that, with the idea that if the camera is closer, it would provide better information. But what you are suggesting is pretty interesting. I'm going to try that as well.
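That nearest-camera heuristic can be sketched in a couple of lines, assuming per-camera depth and color arrays shaped as below (the shapes and names are illustrative):

```python
import numpy as np

def color_by_nearest_camera(depths, colors):
    """depths: (n_cams, n_points), with np.inf where a camera cannot
    see a point; colors: (n_cams, n_points, 3) pixel colors sampled
    per camera. For each point, keep the color from whichever camera
    observes it at the smallest depth."""
    nearest = np.argmin(depths, axis=0)               # (n_points,)
    return colors[nearest, np.arange(depths.shape[1])]
```

This is essentially a per-point z-buffer over cameras; the weighted-sum idea above replaces this hard argmin with a soft blend.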