Washington, Sep 6 (IANS) A team led by an Indian-American researcher has invented a new imaging technique that enables depth-sensing cameras to work in bright light, especially sunlight, as well as in darkness.
Depth-sensing cameras, such as Microsoft’s Kinect controller for video games, are being widely used as 3D sensors.
The key is to gather only the bits of light the camera actually needs.
The researchers from Carnegie Mellon University (CMU) and the University of Toronto created a mathematical model to help programme these devices so that the camera and its light source work together efficiently.
The new technology eliminates extraneous light, or “noise”, that would otherwise wash out the signals needed to detect a scene’s contours.
“We have a way of choosing the light rays we want to capture and only those rays,” said Srinivasa Narasimhan, associate professor of robotics at CMU.
“We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor,” he said.
One prototype based on this model synchronises a laser projector with a common rolling-shutter camera (the type of camera used in most smartphones) so that the camera detects light only from points being illuminated by the laser as it scans across the scene.
This not only makes it possible for the camera to work under extremely bright light or amid highly reflected or diffused light but also makes it extremely energy efficient.
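The benefit of this synchronisation can be shown with a toy calculation (a hypothetical sketch, not the researchers’ code; the numbers are illustrative assumptions): if each sensor row is exposed only during the brief moment the laser scans it, that row integrates ambient light for a fraction of the frame time, so the laser signal is no longer swamped by sunlight.

```python
# Toy model of rolling-shutter/laser synchronisation (illustrative numbers only).
n_rows = 100       # image rows; the rolling shutter exposes them one at a time
signal = 10.0      # light returned to a pixel from the laser-lit scene point
ambient = 1000.0   # bright ambient light (e.g. sunlight) reaching a pixel per row-time

# Conventional capture: a pixel integrates ambient light for the whole frame
# (all row-times), so the laser signal is buried in the background.
conventional = signal + ambient * n_rows

# Synchronised capture: the row is exposed only while the laser scans it,
# so it collects the signal plus ambient light from a single row-time.
synchronised = signal + ambient

# Signal-to-background ratio improves by a factor of n_rows.
print(conventional)                    # 100010.0
print(synchronised)                    # 1010.0
print((signal / synchronised) / (signal / conventional))  # ~99.0x better
```

Under these assumptions the synchronised camera collects roughly a hundredfold less ambient light while keeping the full laser signal, which is the effect described above.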
“This combination of features could make this imaging technology suitable for many applications, including medical imaging, inspection of shiny parts and sensing for robots used to explore the moon and planets,” Narasimhan said.
It also could be readily incorporated into smartphones.
“Depth cameras that can operate outdoors could be useful in automotive applications such as in maintaining spacing between self-driving cars that are ‘platooned’ – following each other at close intervals,” Narasimhan said.
The researchers presented their findings last week at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.