MultiSense recalculates depth for every pixel in the field of view in every frame it captures. This results in very high refresh rates, which is a major benefit over LIDAR-based 3D sensors in large, dynamic environments.

One consideration is that depth information depends on the camera's ability to find features on an object and match them between the left and right images. Object edges that contrast with the background are generally seen, but uniformly colored surfaces, such as indoor painted walls, can cause dropouts in the depth data because the camera cannot find any features to match on the surface. This can be overcome with an infrared pattern projector, which artificially creates features on an otherwise featureless surface. Attached are example images of a camera viewing an indoor scene with and without a pattern projector.

Outdoors this tends to be less of an issue because colors vary much more, making features easier to find. Surfaces such as grass, concrete, asphalt, and gravel give the camera plenty of matching features, and the edges of objects are almost always seen as long as they contrast with the background.
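To illustrate why featureless surfaces cause dropouts, here is a minimal sketch of 1-D block matching along a single scanline. This is a hypothetical toy, not the MultiSense pipeline: on a textured scanline the best match is unique, while on a uniform "painted wall" scanline every candidate disparity matches equally well, so a matcher would have to discard the pixel.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left, right, x, window=3, max_disp=5):
    """Find the disparity whose right-image patch best matches the left
    patch at x. Returns (disparity, ambiguous); ambiguous is True when
    several disparities tie for the best cost, meaning the matcher
    cannot pick one and the pixel's depth would be dropped."""
    patch = left[x:x + window]
    costs = []
    for d in range(max_disp + 1):
        if x - d < 0:
            break
        costs.append((sad(patch, right[x - d:x - d + window]), d))
    best = min(c for c, _ in costs)
    ties = [d for c, d in costs if c == best]
    return ties[0], len(ties) > 1

# Textured scanline: intensities vary, so the match is unique.
textured = [10, 80, 30, 60, 20, 90, 40, 70]
_, ambiguous_textured = best_disparity(textured, textured, 5)

# Featureless scanline (a uniformly painted wall): every shift matches.
flat = [50] * 8
_, ambiguous_flat = best_disparity(flat, flat, 5)

print(ambiguous_textured, ambiguous_flat)  # → False True
```

A pattern projector resolves the second case by stamping texture onto the wall, so the intensities along the scanline vary again and the match becomes unique.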