New York: American researchers, led by an Indian-origin robotics scientist, have developed a new imaging technology that addresses a major shortcoming of depth-sensing 3D cameras — the inability to work in bright light, especially sunlight.
Depth-sensing cameras, such as Microsoft’s Kinect controller for video games, have become widely used 3D sensors.
The key is to gather only the bits of light the camera actually needs.
“We have a way of choosing the light rays we want to capture and only those rays,” said Srinivasa Narasimhan, Carnegie Mellon University associate professor of robotics.
“We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor,” he added.
The researchers created a mathematical model to help program these devices so that the camera and its light source work together efficiently, eliminating extraneous light, or “noise”, that would otherwise wash out the signals needed to detect a scene’s contours.
One prototype based on this model synchronizes a laser projector with a common rolling-shutter camera — the type of camera used in most smartphones — so that the camera detects light only from points being illuminated by the laser as it scans across the scene.
This makes it possible for the camera to work in extremely bright light or amid strong reflections and scattered light. It can, for instance, capture the shape of a lightbulb that has been turned on, and even see through smoke.
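The intuition behind the synchronization can be illustrated with a toy calculation (the numbers and variable names below are hypothetical, not the researchers’ actual model): a rolling-shutter sensor exposes one row at a time, so if each row is exposed only during the brief slice of the frame in which the laser sweeps across it, the row still collects the full laser signal but only a small fraction of the ambient light.

```python
# Toy model of synchronizing a rolling-shutter camera with a scanning laser.
# All values are illustrative assumptions, not measured parameters.

ROWS = 480          # number of sensor rows, exposed one after another
AMBIENT = 100.0     # ambient photons a row collects over a full frame time
LASER = 50.0        # laser photons delivered to a row during its sweep

# Conventional capture: every row integrates ambient light for the whole
# frame in addition to the laser signal, so the signal is diluted.
conventional = AMBIENT + LASER

# Synchronized capture: each row is exposed only for the 1/ROWS slice of
# the frame during which the laser illuminates it, so it receives the full
# laser signal but only 1/ROWS of the ambient light.
synchronized = AMBIENT / ROWS + LASER

def signal_fraction(total):
    """Fraction of the captured light that is laser signal."""
    return LASER / total

print(f"conventional: {signal_fraction(conventional):.1%} signal")
print(f"synchronized: {signal_fraction(synchronized):.1%} signal")
```

With these toy numbers the laser accounts for about a third of what a conventional capture collects, but more than 99 percent of the synchronized capture, which is why the extraneous light never needs to be removed in post-processing.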
This combination of features could make this imaging technology suitable for many applications, including medical imaging, inspection of shiny parts and sensing for robots used to explore the moon and planets.
It also could be readily incorporated into smartphones.
The researchers presented their findings at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.