Seeing Hidden Objects Around Corners

Researchers have designed a laser-based system that can produce images of objects hidden from view. (Source: SCI Lab, Stanford U.)

A driverless car is making its way through a winding neighborhood street, about to make a sharp turn onto a road where a child’s ball has just rolled. Although no person in the car can see that ball, the car stops to avoid it. This is because the car is outfitted with extremely sensitive laser technology that bounces light off nearby objects to see around corners. This scenario is one of many that researchers at Stanford University are imagining for a system that can produce images of objects hidden from view. They are focused on applications for autonomous vehicles, some of which already have similar laser-based systems for detecting objects around the car, but other uses could include seeing through foliage from aerial vehicles or giving rescue teams the ability to find people blocked from view by walls and rubble.

“It sounds like magic, but the idea of non-line-of-sight imaging is actually feasible,” said Gordon Wetzstein, assistant professor of electrical engineering. The Stanford group isn’t alone in developing methods for bouncing lasers around corners to capture images of objects. Where this research advances the field is in the extremely efficient and effective algorithm the researchers developed to process the final image. “A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3-D structure of the hidden object from the noisy measurements,” said David Lindell, graduate student in the Stanford Computational Imaging Lab. “I think the big impact of this method is how computationally efficient it is.”
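According to the Nature paper referenced below, that efficiency comes from recasting the reconstruction as a shift-invariant 3-D deconvolution via the so-called light-cone transform, which can be solved with fast Fourier transforms. As a rough sketch of why such a step is fast, the snippet below shows a generic FFT-based Wiener deconvolution of a 3-D volume; the function name, array shapes and the snr parameter are illustrative assumptions, not the authors’ actual code.

```python
# Illustrative only: a generic FFT-based Wiener deconvolution, standing in for the
# deconvolution step that the light-cone transform enables. Names and parameters
# are assumptions for this sketch, not the published implementation.
import numpy as np

def wiener_deconvolve_3d(volume, psf, snr=100.0):
    """Deconvolve a 3-D volume with a shift-invariant kernel via FFTs.

    volume : measured 3-D data (after resampling), shape (nz, ny, nx)
    psf    : blur kernel of the same shape, centered in the array
    snr    : assumed signal-to-noise ratio acting as regularization
    """
    V = np.fft.fftn(volume)
    H = np.fft.fftn(np.fft.ifftshift(psf))         # shift kernel center to the origin
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter in frequency space
    return np.real(np.fft.ifftn(W * V))
```

Because a reconstruction of this form amounts to a handful of FFTs, it runs in a fraction of a second on an ordinary laptop, consistent with the sub-second processing described below.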

For their system, the researchers set a laser next to a highly sensitive photon detector, which can record even a single particle of light. They fire pulses of laser light, invisible to the human eye, at a wall; those pulses scatter off the wall to objects around the corner and bounce back to the wall and into the detector. Currently, this scan can take from two minutes to an hour, depending on conditions such as lighting and the reflectivity of the hidden object. Once the scan is finished, the algorithm untangles the paths of the captured photons and, like the mythical image-enhancement technology of television crime shows, the blurry blob takes much sharper form. It does all this in less than a second and is so efficient it can run on a regular laptop. Based on how well the algorithm currently works, the researchers think they could speed it up so that it is nearly instantaneous once the scan is complete.
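To make the geometry concrete, here is a deliberately simplified sketch of how photon time-of-flight histograms can be turned into a 3-D image by backprojection. This is not the authors’ light-cone-transform algorithm (which is far more efficient); the grid, time-bin width and all names are assumptions chosen for illustration.

```python
# A minimal backprojection sketch (an illustration, not the published method):
# each photon arrival time constrains the hidden object to lie on a sphere around
# the scanned wall point, and summing the histograms over those spheres builds up
# a 3-D image of the hidden scene.
import numpy as np

C = 3e8          # speed of light in m/s
BIN = 32e-12     # assumed histogram time-bin width in seconds (typical SPAD scale)

def backproject(histograms, wall_xy, voxels_xyz):
    """Accumulate photon counts into candidate hidden-scene voxels.

    histograms : (num_scan_points, num_time_bins) photon counts per wall point
    wall_xy    : (num_scan_points, 2) x/y positions scanned on the wall, in meters
    voxels_xyz : (num_voxels, 3) candidate hidden-object positions, in meters
    """
    volume = np.zeros(len(voxels_xyz))
    num_bins = histograms.shape[1]
    for i, (wx, wy) in enumerate(wall_xy):
        # Confocal scan: the laser and detector address the same wall point, so the
        # round trip wall -> object -> wall is twice the wall-to-voxel distance.
        d = np.sqrt((voxels_xyz[:, 0] - wx) ** 2 +
                    (voxels_xyz[:, 1] - wy) ** 2 +
                    voxels_xyz[:, 2] ** 2)
        t_bin = np.round(2.0 * d / C / BIN).astype(int)
        valid = t_bin < num_bins
        volume[valid] += histograms[i, t_bin[valid]]
    return volume

# Tiny synthetic example: one scan point and a hidden point 1 m behind the wall.
wall = np.array([[0.0, 0.0]])
hist = np.zeros((1, 4096))
hist[0, int(round(2.0 / C / BIN))] = 10.0        # photons from a 2 m round trip
voxels = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.5]])
print(backproject(hist, wall, voxels))           # the 1 m voxel collects the counts
```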

The team is continuing to work on this system so it can better handle the variability of the real world and complete the scan more quickly. For example, the distance to the object and the amount of ambient light can make it difficult for their technology to see the light particles it needs to resolve out-of-sight objects. This technique also depends on analyzing scattered light particles that are intentionally ignored by lidar systems currently in cars. “We believe the computation algorithm is already ready for lidar systems,” said Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab. “The key question is if the current hardware of lidar systems supports this type of imaging.”

Before this system is road ready, it will also have to work better in daylight and with objects in motion, like a bouncing ball or a running child. The researchers did test their technique successfully outside, but they worked only with indirect light. Their technology performed particularly well at picking out retroreflective objects, such as safety apparel or traffic signs. The researchers say that if the technology were placed on a car today, that car could easily detect things like road signs, safety vests or road markers, although it might struggle with a person wearing non-reflective clothing. “This is a big step forward for our field that will hopefully benefit all of us,” said Wetzstein. “In the future, we want to make it even more practical in the wild.” (Source: Stanford U.)

Reference: M. O’Toole et al.: Confocal non-line-of-sight imaging based on the light-cone transform, Nature, online 5 March 2018; DOI: 10.1038/nature25489

Link: Computational Imaging Lab (G. Wetzstein), Stanford University, Stanford, USA
