Image Recognition in Nanoseconds

A picture is analyzed by the chip, which then provides the appropriate output signal. (Source: J. Symonowicz, TU Wien)

Automatic image recognition is widely used today: There are computer programs that can reliably diagnose skin cancer, navigate self-driving cars, or control robots. Up to now, all this has been based on the evaluation of image data as delivered by normal cameras – and that is time-consuming. Especially when the number of images recorded per second is high, the volume of data quickly becomes difficult to handle. Scientists at TU Wien therefore took a different approach: using a special 2D material, they developed an image sensor that can be trained to recognize certain objects.

The chip represents an artificial neural network capable of learning. The data does not have to be read out and processed by a computer; instead, the chip itself reports what it is currently seeing within nanoseconds. Neural networks are artificial systems similar to our brain: nerve cells are connected to many other nerve cells, and when one cell is active, this can influence the activity of neighbouring nerve cells. Artificial learning on the computer works according to exactly the same principle: a network of neurons is simulated digitally, and the strength with which one node of this network influences the others is changed until the network shows the desired behaviour.

“Typically, the image data is first read out pixel by pixel and then processed on the computer,” says Thomas Mueller. “We, on the other hand, integrate the neural network with its artificial intelligence directly into the hardware of the image sensor. This makes object recognition many orders of magnitude faster.” The chip was developed and manufactured at TU Wien. It is based on photodetectors made of tungsten diselenide – an ultra-thin material consisting of only three atomic layers. The individual photodetectors, the pixels of the camera system, are all connected to a small number of output elements that provide the result of object recognition.
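In effect, this describes a single layer of weighted connections: every pixel contributes to each of the few output elements, with the per-detector sensitivity acting as the connection weight. The Python sketch below illustrates that mapping; the array sizes and variable names are assumptions chosen for illustration, not the chip's actual specification.

```python
import numpy as np

# Illustrative sizes only (assumptions, not the actual chip layout):
N_PIXELS = 27    # number of photodetector elements
N_OUTPUTS = 3    # small number of output elements

# Each entry plays the role of one tunable connection: how strongly the
# signal of a given pixel contributes to a given output element.
sensitivity = np.random.uniform(-1.0, 1.0, size=(N_OUTPUTS, N_PIXELS))

def chip_output(pixel_signals):
    """Weighted sum of all pixel signals for each output element."""
    return sensitivity @ pixel_signals

# Example: a flattened image of pixel signals yields N_OUTPUTS values.
image = np.random.rand(N_PIXELS)
print(chip_output(image))
```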

“In our chip, we can specifically adjust the sensitivity of each individual detector element – in other words, we can control the way the signal picked up by a particular detector affects the output signal,” says Lukas Mennel. “All we have to do is simply adjust a local electric field directly at the photodetector.” This adaptation is done externally, with the help of a computer program. One can, for example, use the sensor to record different letters and change the sensitivities of the individual pixels step by step until a certain letter always produces the corresponding output signal. This is how the neural network in the chip is configured – making some connections in the network stronger and others weaker.
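Conceptually, that external adjustment resembles fitting the weights of a single-layer classifier: record a letter, compare the resulting output with the desired code, and nudge each pixel's sensitivity in the direction that reduces the error. The sketch below uses a generic least-squares update chosen purely for illustration; the article does not state which update procedure the researchers actually used.

```python
import numpy as np

def train_sensitivities(letters, targets, n_steps=1000, lr=0.01):
    """Adjust per-pixel sensitivities until each letter maps to its target code.

    letters: (n_samples, n_pixels) array of recorded pixel signals
    targets: (n_samples, n_outputs) array of desired output signals
    Returns a sensitivity matrix of shape (n_outputs, n_pixels).
    """
    n_samples, n_pixels = letters.shape
    n_outputs = targets.shape[1]
    sensitivity = np.zeros((n_outputs, n_pixels))

    for _ in range(n_steps):
        outputs = letters @ sensitivity.T   # current response to each letter
        error = targets - outputs           # distance from the desired code
        # Strengthen or weaken each connection in proportion to its error.
        sensitivity += lr * error.T @ letters / n_samples
    return sensitivity
```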

Once this learning process is complete, the computer is no longer needed. The neural network can then work on its own. If a certain letter is presented to the sensor, it generates the trained output signal within 50 nanoseconds – for example, a numerical code representing the letter that the chip has just recognized. “Our test chip is still small at the moment, but you can easily scale up the technology depending on the task you want to solve,” says Thomas Mueller. “In principle, the chip could also be trained to distinguish apples from bananas, but we see its use more in scientific experiments or other specialized applications.”
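Once the sensitivities are fixed, recognition amounts to the same weighted summation, performed by the chip in hardware rather than in software. A software emulation of that trained readout might look like the following; the letter codes and the argmax-style readout are hypothetical choices for illustration.

```python
import numpy as np

def recognize(pixel_signals, sensitivity, codes=("A", "B", "C")):
    """Emulate the trained readout: the strongest output element wins.

    `codes` is a hypothetical mapping from output elements to letters;
    on the chip itself, the output signals directly encode the result.
    """
    outputs = sensitivity @ pixel_signals
    return codes[int(np.argmax(outputs))]
```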

The technology can be usefully applied wherever extremely high speed is required: “From fracture mechanics to particle detection – in many research areas, short events are investigated,” says Thomas Mueller. “Often it is not necessary to keep all the data about this event, but rather to answer a very specific question: Does a crack propagate from left to right? Which of several possible particles has just passed by? This is exactly what our technology is good for.” (Source: TU Wien)

Reference: L. Mennel et al.: Ultrafast machine vision with 2D material neural network image sensors, Nature 579, 62 (2020), DOI: 10.1038/s41586-020-2038-x

Link: Nanoscale Electronics & Optoelectronics (T. Müller), Technische Universität Wien, Vienna, Austria
