Deep learning, a subset of the broader field of AI, is concerned with building intelligent machines that can learn, act, and achieve goals much as humans do. Over the last few years, deep learning models have been shown to outperform conventional machine learning techniques in diverse fields. The technology enables computational models composed of multiple processing layers to learn and represent data at several levels of abstraction, imitating how the human brain senses and understands multimodal information.
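The idea of "multiple processing layers" can be illustrated with a minimal sketch: each layer applies a weighted transformation followed by a nonlinearity, so later layers can represent more abstract combinations of the raw input. The weights below are random and purely illustrative, not taken from any trained model.

```python
import numpy as np

def relu(x):
    # Nonlinearity: passes positive values, zeroes out negatives.
    return np.maximum(0, x)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),  # layer 1: 3 inputs -> 4 units
    (rng.standard_normal((2, 4)), np.zeros(2)),  # layer 2: 4 units -> 2 outputs
]
out = forward(np.array([1.0, -0.5, 2.0]), layers)
print(out.shape)  # (2,)
```

Training such a stack means adjusting the weights so the final layer's output matches a desired target; here we only show the forward pass.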
A team of researchers from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system modeled on the brains of tiny animals such as threadworms. The new system can reportedly control a vehicle with just a few artificial neurons. According to the researchers, it has decisive advantages over previous deep learning models: it copes better with noisy input, and, owing to its simplicity, its mode of operation can be explained in detail. Rather than having to be treated as a complex black box, the researchers note, it can be understood by humans.
According to the report, artificial neural networks (ANNs), like human brains, consist of numerous individual cells. When a cell is active, it sends a signal to other cells. The next cell receives all of these signals and combines them to decide whether it will become active as well. “For years, we have been investigating what we can learn from nature to improve deep learning,” said Prof. Radu Grosu, head of the research group Cyber-Physical Systems at TU Wien. “The nematode C. elegans, for example, lives its life with an amazingly small number of neurons, and still shows interesting behavioral patterns. This is due to the efficient and harmonious way the nematode’s nervous system processes information.”
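The signal-combining behavior described above can be sketched in a few lines: a cell weighs the signals arriving from upstream cells, sums them, and becomes active only if the total crosses a threshold. The weights and threshold here are illustrative values, not parameters of the researchers' system.

```python
def neuron_fires(incoming_signals, weights, threshold=1.0):
    """Combine incoming signals as a weighted sum; fire if it crosses the threshold."""
    combined = sum(s * w for s, w in zip(incoming_signals, weights))
    return combined >= threshold

# Three upstream cells; a signal of 1.0 means the upstream cell is active.
print(neuron_fires([1.0, 0.0, 1.0], [0.6, 0.9, 0.5]))  # True  (0.6 + 0.5 = 1.1 >= 1.0)
print(neuron_fires([1.0, 0.0, 0.0], [0.6, 0.9, 0.5]))  # False (0.6 < 1.0)
```

Real ANN units typically replace the hard threshold with a smooth activation function, but the principle, combining weighted inputs to decide activity, is the same.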
For their test, the researchers chose a task: keeping self-driving cars in their lane. The neural network took camera images of the road as input and decided automatically whether to steer to the right or left. According to Alexander Amini, a Ph.D. student at MIT CSAIL, the new system has two parts. The camera input is first processed by a convolutional neural network, which perceives the visual data only to extract structural features from the incoming pixels, deciding which parts of the camera image are interesting and important. It then passes signals on to the crucial part of the network: a “control system” that steers the vehicle.
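The two-part pipeline can be sketched as follows, under loose assumptions: a convolutional stage slides a small kernel over the image to extract structural features (here, a simple horizontal edge detector), and a much smaller control stage maps those features to a steering decision. The image, kernel, and control weights are all stand-ins, not the researchers' actual model.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def control_system(features, weights):
    """Tiny control stage: weighted sum of features -> steer left or right."""
    score = float(np.dot(features, weights))
    return "left" if score < 0 else "right"

image = np.arange(25, dtype=float).reshape(5, 5)  # stand-in camera frame
edge_kernel = np.array([[-1.0, 1.0]])             # horizontal edge detector
features = convolve2d(image, edge_kernel).ravel() # extracted structural features
weights = np.full_like(features, 0.1)             # illustrative control weights
print(control_system(features, weights))          # prints "right"
```

In the researchers' system the control stage is the interesting part: it uses only a handful of neurons, which is what makes its behavior small enough to inspect and explain.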
Testing the new deep learning model in an autonomous vehicle allowed the researchers to examine what the network focuses its attention on while driving. “Our networks focus on very specific parts of the camera picture: the curbside and the horizon. This behavior is highly desirable, and it is unique among artificial intelligence systems,” said Ramin Hasani, a postdoctoral associate at the Institute of Computer Engineering, TU Wien, and MIT CSAIL. Through their study, the researchers found that interpretability and robustness are the two major advantages of the new deep learning model.