I am working on putting together a mobile 3D scanner to quickly build a rough 3D model of the environment I am shooting, for use in matching shadows and reflections. I have the 3D reconstruction working on my PC and the 3D scanner working on my tablet, but the tablet drops frames, which causes the reconstruction software to abort.
I thought I would try to use the depth map and photo to make a 3D model.
I spent several hours trying to follow this tutorial in Maya 2016, but the interface has changed so drastically that I couldn't figure it out; the input/output controls in particular are completely different.
I found a tutorial to do it with Photoshop, and it worked, but this was the result. There is a discrepancy between the focal length of the camera lens and the depth map. I assume the Kinect SDK software compensates for this.
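That focal-length mismatch is why a flat displacement-map approach distorts: turning a depth map into geometry properly means back-projecting each pixel through the depth camera's own intrinsics, not the color camera's. A minimal sketch of that back-projection, with placeholder intrinsic values that are only roughly Kinect-class (not taken from any SDK), looks like this:

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud
    using pinhole-camera intrinsics (fx, fy = focal lengths in
    pixels; cx, cy = principal point)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Illustrative call on a flat 2 m "wall"; the intrinsics here are
# placeholder values, not calibrated numbers from the Kinect SDK.
points = depth_to_points(np.full((480, 640), 2.0),
                         fx=580.0, fy=580.0, cx=320.0, cy=240.0)
```

Using the color camera's focal length in place of `fx`/`fy` here would stretch or squash the cloud, which is the distortion the Photoshop result shows.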
This is what the live capture looks like.
Long shot point cloud across the lab (about 25 feet)
Long shot point cloud across the lab from above (about 25 feet)
Long shot with texture across the lab from above (about 25 feet)
Long shot with texture across the lab from above (about 25 feet)
Close up of the chairs
Long shot point cloud across the lab from above (about 25 feet)
Close Point cloud
Close up color
The results are not great. The software that creates a model from the Kinect clearly does a lot of processing to eliminate the noise. My tablet is not up to the task, and classroom services no longer loans laptops. Since I have a lightfield camera, I am going to have to create the model from photographs instead, using these rough depth-map models as a reference.
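A large part of that noise processing is simple speckle suppression. As a rough illustration of the idea (this is a generic filter sketch, not what the Kinect reconstruction software actually runs), a 3x3 median filter knocks out isolated bad depth readings while preserving edges better than a blur would:

```python
import numpy as np

def median_filter3(depth):
    """3x3 median filter over a depth map. For brevity this treats
    zero/invalid pixels as ordinary data; a real pipeline would mask
    them out first."""
    padded = np.pad(depth, 1, mode='edge')
    h, w = depth.shape
    # Stack the nine shifted neighborhoods and take the per-pixel median.
    stack = [padded[dy:dy + h, dx:dx + w]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

# A flat 2 m surface with one speckle outlier, as the Kinect often
# produces on shiny or oblique surfaces.
noisy = np.full((5, 5), 2.0)
noisy[2, 2] = 9.0
clean = median_filter3(noisy)  # the outlier is replaced by its neighbors
```

The single 9.0 speckle is voted out by its eight 2.0 neighbors, while a genuine depth edge (many pixels agreeing on each side) would survive.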