Optical Network Accelerates Machine Learning

Researchers from the George Washington University, the University of California, Los Angeles, and the startup Optelligence have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the order of petabytes per second. This innovation, which harnesses the massive parallelism of light, heralds a new era of optical signal processing for machine learning with numerous applications, including self-driving cars, 5G networks, data centers, biomedical diagnostics, data security and more.

Illustration of a massively parallel amplitude-only Fourier neural network. (Source: GWU)

Global demand for machine learning hardware is dramatically outpacing current computing power supplies. State-of-the-art electronic hardware, such as graphics processing units and tensor processing unit accelerators, helps mitigate this, but is intrinsically challenged by serial, iterative data processing and by delays from wiring and circuit constraints. Optical alternatives to electronic hardware could speed up machine learning by processing information in a non-iterative way. However, photonic machine learning is typically limited by the number of components that can be placed on photonic integrated circuits, which constrains interconnectivity, while free-space spatial light modulators are restricted to slow programming speeds.

To achieve a breakthrough in this optical machine learning system, the researchers replaced spatial light modulators with digital mirror-based technology, thus developing a system over a hundred times faster. The non-iterative timing of this processor, in combination with rapid programmability and massive parallelization, enables this optical machine learning system to outperform even the top-of-the-line graphics processing units by over one order of magnitude, with room for further optimization beyond the initial prototype.

Unlike the current paradigm in electronic machine learning hardware, which processes information sequentially, this processor uses Fourier optics, a frequency-filtering concept that allows the required convolutions of the neural network to be performed as much simpler element-wise multiplications using the digital mirror technology. “This massively parallel amplitude-only Fourier optical processor is heralding a new era for information processing and machine learning. We show that training this neural network can account for the lack of phase information”, said Volker Sorger, associate professor of electrical and computer engineering at the George Washington University.
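The speed-up rests on the convolution theorem: a convolution in the image domain becomes a single element-wise multiplication in the Fourier domain, which the optical hardware can apply to an entire Fourier plane at once. As a rough digital illustration only (not the authors' code), the NumPy sketch below mimics this idea; the function name `fourier_convolve` and the `amplitude_only` flag are assumptions for the example, the latter standing in for a modulator that controls amplitude but not phase.

```python
import numpy as np

def fourier_convolve(image, kernel, amplitude_only=False):
    """Circular convolution of `image` with `kernel` via 2-D FFTs."""
    # Zero-pad the kernel to the image size so both spectra align.
    padded = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel

    img_f = np.fft.fft2(image)
    ker_f = np.fft.fft2(padded)
    if amplitude_only:
        # Keep only the magnitude of the kernel spectrum, mimicking an
        # amplitude-only modulator; per the article, training the network
        # can compensate for the discarded phase information.
        ker_f = np.abs(ker_f)

    # One element-wise product replaces the sliding-window convolution.
    return np.real(np.fft.ifft2(img_f * ker_f))

# Toy usage: a 512x512 random "image" filtered by a 3x3 edge-detection kernel.
rng = np.random.default_rng(0)
image = rng.random((512, 512))
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

out = fourier_convolve(image, kernel)
out_amp = fourier_convolve(image, kernel, amplitude_only=True)
print(out.shape, out_amp.shape)  # (512, 512) (512, 512)
```

The element-wise product in the last line of the function is the operation the optical processor performs in parallel over the whole Fourier plane in a single pass, rather than pixel by pixel as a digital sliding-window convolution would.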

“Optics allows for processing large-scale matrices in a single time-step, which allows for new scaling vectors of performing convolutions optically. This can have significant potential for machine learning applications as demonstrated here”, said Puneet Gupta, professor and vice chair of computer engineering at UCLA. “This prototype demonstration shows a commercial path for optical accelerators ready for a number of applications like network-edge processing, data centers and high-performance compute systems”, added Hamed Dalir, co-founder of Optelligence. (Source: GWU)

Reference: M. Miscuglio et al.: Massively parallel amplitude-only Fourier neural network, Optica 7, 1812 (2020); DOI: 10.1364/OPTICA.408659

Link: AI Photonics & Nanophotonics Lab, Dept. of Electrical and Computer Engineering, George Washington University, Washington, USA
