Saturday, January 23, 2016

Depth Map from the Lightfield Camera

The time I spent trying to match the model to the live footage last semester could have been saved if I had a portable 3D scanner. The Kinect generates a depth map, but it is too noisy, and it can't capture more than one perspective at a time. Paired with ReconstructMe, it can generate a model in real time that is usable as a shadow mask object, but the limited processing power of my tablet causes the frame rate to drop and the model generation to abort.
I thought I would try a lightfield camera because its depth map is generated from the color images themselves, which I expected would give better registration. But the Lytro camera's depth map turned out to be much worse than the Kinect's.

[Figures: Lytro depth map results — image, depth map, and overlay]

The depth is computed by a difference comparator that matches corresponding pixels across the images captured by the microlens array. This eliminates the Kinect's IR emitter and captures the depth map and the image through the same lens, but because depth is derived from differences in pixel color and brightness, the method fails to register depth wherever the surface is uniform. The software does some adjustment, and with a bit of editing the depth map can be used to create a virtual focus and aperture, but both the depth resolution and the fidelity are low.
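
To illustrate why uniform surfaces defeat this approach, here is a minimal sketch of difference-based depth estimation, assuming a simple sum-of-absolute-differences block matcher between two sub-aperture views. This is my own simplified stand-in, not Lytro's actual pipeline; the function name, parameters, and the NumPy-only setup are illustrative assumptions. On a textured patch the matching cost has a clear minimum at the true disparity; on a uniform patch the cost curve is flat, so the depth is ambiguous.

import numpy as np

def block_match_disparity(left, right, patch=7, max_disp=16):
    """Per-pixel disparity from left to right view via SAD block matching.

    Hypothetical helper: returns a disparity map plus a confidence map
    (cost contrast), which collapses to zero on uniform surfaces.
    """
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    conf = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            # Sum-of-absolute-differences cost for each candidate shift.
            costs = np.array([
                np.abs(ref - right[y - half:y + half + 1,
                                   x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ])
            disp[y, x] = costs.argmin()
            # Flat cost curve (uniform surface) -> near-zero confidence.
            conf[y, x] = costs.max() - costs.min()
    return disp, conf

# Toy demo: a textured region matches reliably, a uniform region does not.
rng = np.random.default_rng(0)
textured = rng.random((32, 64)).astype(np.float32)
uniform = np.full((32, 64), 0.5, dtype=np.float32)
shift = 4
for name, img in (("textured", textured), ("uniform", uniform)):
    left = img
    right = np.roll(img, -shift, axis=1)  # simulate a horizontal baseline
    disp, conf = block_match_disparity(left, right)
    valid = conf > 1e-6
    print(name, "mean confidence:", conf.mean(),
          "median disparity of confident pixels:",
          np.median(disp[valid]) if valid.any() else "undefined")

Real lightfield pipelines compare many sub-aperture views and regularize the result, but the core limitation is the same: without variation in color or brightness there is nothing for the comparator to lock onto, which is exactly the failure the Lytro shows on flat, evenly lit surfaces.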

Conclusion:
The lightfield camera's depth map is unusable for the creation of 3D models.
