Lensless Single-Exposure 3D Imaging

The researchers used the DiffuserCam to reconstruct the 3D structure of leaves from a small plant. They plan to use the new camera to watch neurons fire in living mice without using a microscope. (Source: L. Waller, UCB)

Researchers have developed an easy-to-build camera that produces 3D images from a single 2D image without any lenses. In an initial application of the technology, the researchers plan to use the new camera, which they call DiffuserCam, to watch microscopic neuron activity in living mice without a microscope. Ultimately, it could prove useful for a wide range of applications involving 3D capture. The camera is compact and inexpensive to construct because it consists of only a diffuser placed on top of an image sensor. Although the hardware is simple, the software it uses to reconstruct high-resolution 3D images is very complex.

“The DiffuserCam can, in a single shot, capture 3D information in a large volume with high resolution,” said research team leader Laura Waller of the University of California, Berkeley. “We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects.” The researchers showed that the DiffuserCam can be used to reconstruct 100 million voxels, or 3D pixels, from a 1.3-megapixel image without any scanning. The researchers used the camera to capture the 3D structure of leaves from a small plant.

“Our new camera is a great example of what can be accomplished with computational imaging – an approach that examines how hardware and software can be used together to design imaging systems,” said Waller. “We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home.” A DiffuserCam can be created using any type of image sensor and can image objects that range from microscopic in scale all the way up to the size of a person. It offers a resolution in the tens of microns range when imaging objects close to the sensor. Although the resolution decreases when imaging a scene farther away from the sensor, it is still high enough to distinguish that one person is standing several feet closer to the camera than another person, for example.

The DiffuserCam is a relative of the light field camera, which captures how much light is striking a pixel on the image sensor as well as the angle from which the light hits that pixel. In a typical light field camera, an array of tiny lenses placed in front of the sensor is used to capture the direction of the incoming light, allowing computational approaches to refocus the image and create 3D images without the scanning steps typically required to obtain 3D information.

Until now, light field cameras have been limited in spatial resolution because some spatial information is lost while collecting the directional information. Another drawback of these cameras is that the microlens arrays are expensive and must be customized for the particular camera or optical components used for imaging. “I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware,” said Waller. “If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a plastic surface with a random pattern, such as a bumpy piece of plastic?”

After experimenting with various types of diffusers and developing the complex algorithms, Nick Antipa and Grace Kuo, students in Waller’s lab, discovered that Waller’s idea for a simple light field camera was possible. In fact, the random bumps in privacy-glass stickers, Scotch tape or plastic conference badge holders allowed the researchers to improve on traditional light field camera capabilities, using compressed sensing to avoid the loss of resolution that typically comes with microlens arrays.
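The core idea behind compressed sensing — recovering a sparse signal from fewer measurements than unknowns, given a pseudorandom sensing pattern — can be sketched in a few lines. The toy below is an illustrative assumption on my part, not the paper's reconstruction pipeline: it uses a generic random sensing matrix and plain ISTA (iterative soft-thresholding) rather than the DiffuserCam's convolutional forward model and solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed-sensing problem (illustrative sizes, not DiffuserCam's):
# recover a 3-sparse scene x from only 60 pseudorandom measurements y = A @ x.
m, n = 60, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix

x_true = np.zeros(n)
x_true[[10, 40, 90]] = [1.0, -0.8, 0.6]          # sparse "scene"
y = A @ x_true                                   # fewer measurements than unknowns

# ISTA: iterative soft-thresholding for  min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L                # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(sorted(np.argsort(-np.abs(x))[:3]))        # recovered nonzero locations
```

The same principle is what lets the DiffuserCam trade a carefully engineered microlens array for a random diffuser: the randomness itself makes the measurements informative enough for a sparsity-exploiting solver.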

Although other light field cameras use lens arrays that are precisely designed and aligned, the exact size and shape of the bumps in the new camera’s diffuser are unknown. This means that a few images of a moving point of light must be acquired to calibrate the software prior to imaging. The researchers are working on a way to eliminate this calibration step by using the raw data for calibration. They also want to improve the accuracy of the software and make the 3D reconstruction faster.
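Under a shift-invariance assumption, that calibration step amounts to measuring the system's point spread function (PSF): the image of a point source *is* the PSF, and reconstruction then becomes a deconvolution. The 1D circular model and Wiener-style inverse filter below are my own simplifying assumptions, used as a baseline stand-in for the paper's actual solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration: imaging a point source yields the diffuser's
# pseudorandom "caustic" PSF h (1D toy model, circular boundary conditions).
n = 64
h = np.abs(rng.standard_normal(n))
h /= h.sum()                                     # normalize the PSF

x_true = np.zeros(n)
x_true[[5, 20, 45]] = [1.0, 0.7, 0.4]            # point sources in the scene
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))  # y = h (*) x

# Wiener-style deconvolution in the Fourier domain (a standard baseline,
# not the paper's solver); eps regularizes frequencies where |H| is small.
H = np.fft.fft(h)
eps = 1e-4
x_hat = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + eps)))

print(sorted(np.argsort(-x_hat)[:3]))            # brightest recovered positions
```

Because the diffuser's bumps are random rather than machined, h must be measured rather than computed from a design — which is exactly why the moving-point calibration images are needed before imaging.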

The new camera will be used in a project at the University of California, Berkeley that aims to watch a million individual neurons while stimulating 1,000 of them with single-cell accuracy. The project is funded by DARPA’s Neural Engineering System Design program to develop implantable, biocompatible neural interfaces that could eventually compensate for visual or hearing deficits. As a first step, the researchers want to create what they call a cortical modem that will “read” and “write” to the brains of animal models, much like the input-output activity of internet modems. The DiffuserCam will be the heart of the reading device for this project, which will also use special proteins that allow scientists to control neuronal activity with light.

“Using this to watch neurons fire in a mouse brain could in the future help us understand more about sensory perception and provide knowledge that could be used to cure diseases like Alzheimer’s or mental disorders,” said Waller. Although newly developed imaging techniques can capture hundreds of neurons firing, how the brain works on larger scales is not fully understood. The DiffuserCam has the potential to provide that insight by imaging millions of neurons in one shot. Because the camera is lightweight and requires no microscope or objective lens, it can be attached to a transparent window in a mouse’s skull, allowing neuronal activity to be linked with behavior. Several arrays with overlying diffusers could be tiled to image large areas.

“Our work shows that computational imaging can be a creative process that examines all parts of the optical design and algorithm design to create optical systems that accomplish things that couldn’t be done before or to use a simpler approach to something that could be done before,” Waller said. “This is a very powerful direction for imaging, but requires designers with optical and physics expertise as well as computational knowledge.” (Source: OSA)

Reference: N. Antipa et al.: DiffuserCam: lensless single-exposure 3D imaging, Optica 5, 1 (2018); DOI: 10.1364/OPTICA.5.000001

Link: Computational Imaging Lab (L. Waller), University of California, Berkeley, USA
