Seeing Around Corners

At left, an image of a quarter scanned using non-line-of-sight imaging. At right, an image of a quarter scanned using line-of-sight imaging. (Source: CMU)

Computer vision researchers have demonstrated that they can use special light sources and sensors to see around corners or through gauzy filters, enabling them to reconstruct the shapes of unseen objects. The researchers, from Carnegie Mellon University, the University of Toronto and University College London, said the technique enables them to reconstruct images in great detail, including the relief of George Washington's profile on a U.S. quarter.

Ioannis Gkioulekas, an assistant professor in Carnegie Mellon's Robotics Institute, said this is the first time researchers have been able to compute millimeter- and micrometer-scale shapes of curved objects, providing an important new component to a larger suite of non-line-of-sight (NLOS) imaging techniques now being developed by computer vision researchers. "It is exciting to see the quality of reconstructions of hidden objects get closer to the scans we're used to seeing for objects that are in the line of sight," said Srinivasa Narasimhan, a professor in the Robotics Institute. "Thus far, we can achieve this level of detail for only relatively small areas, but this capability will complement other NLOS techniques."

Most of what people see – and what cameras detect – comes from light that reflects off an object and bounces directly to the eye or the lens. But light also reflects off objects in other directions, bouncing off walls and other surfaces. A faint bit of this scattered light ultimately might reach the eye or the lens, but it is washed out by brighter, more direct light. NLOS techniques try to extract information from this scattered light, whether naturally occurring or deliberately introduced, and produce images of scenes, objects or parts of objects that are not otherwise visible.

“Other NLOS researchers have already demonstrated NLOS imaging systems that can understand room-size scenes, or even extract information using only naturally occurring light,” Gkioulekas said. “We’re doing something that’s complementary to those approaches – enabling NLOS systems to capture fine detail over a small area.” In this case, the researchers used an ultrafast laser to bounce light off a wall to illuminate a hidden object. By knowing when the laser fired pulses of light, the researchers could calculate the time the light took to reflect off the object, bounce off the wall on its return trip and reach a sensor.

“This time-of-flight technique is similar to that of the lidars often used by self-driving cars to build a 3D map of the car’s surroundings,” said Shumian Xin, a Ph.D. student in robotics. Previous attempts to use these time-of-flight calculations to reconstruct an image of the object have depended on the brightness of the reflections off it. But in this study, Gkioulekas said the researchers developed a new method based purely on the geometry of the object, which in turn enabled them to create an algorithm for measuring its curvature.
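The time-of-flight principle described above can be sketched in a few lines of code. This is a minimal illustration of the general idea, with made-up numbers; the pulse timing, geometry and any calibration in the researchers' actual system are far more involved:

```python
# Time-of-flight sketch: a laser pulse travels laser -> wall -> hidden
# object -> wall -> sensor. Measuring the round-trip time of the pulse
# gives the total length of that path. All values here are illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def path_length_from_tof(round_trip_seconds: float) -> float:
    """Total distance the pulse traveled, from its round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds

# A pulse detected 10 nanoseconds after firing traveled roughly 3 m in total,
# spread across the laser-to-wall, wall-to-object and return segments.
total_path = path_length_from_tof(10e-9)
print(f"total path length: {total_path:.3f} m")
```

Note that a single timing measurement constrains only the total path length, not where along that path the object sits; recovering shape requires many such measurements from different points on the wall, which is where the geometric analysis in this study comes in.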

The researchers used an imaging system that is effectively a lidar capable of sensing single particles of light to test the technique on objects such as a plastic jug, a glass bowl, a plastic bowl and a ball bearing. They also combined this technique with optical coherence tomography to reconstruct the images of U.S. quarters. In addition to seeing around corners, the technique proved effective in seeing through diffusing filters, such as thick paper.

The technique thus far has been demonstrated only at short distances – a meter at most. But the researchers speculate that their technique, based on geometric measurements of objects, might be combined with other, complementary approaches to improve NLOS imaging. It might also be employed in other applications, such as seismic imaging and acoustic and ultrasound imaging. (Source: CMU)

Reference: S. Xin et al.: "A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), Long Beach, USA

Link: Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
