A team of engineers at Penn State is working on a new type of computing as progress in traditional computing continues to slow. The new method is modeled on the brain's neural networks, which are extremely energy efficient.
The major difference between modern digital computing and analog computing, the category to which the human brain belongs, is that a digital computer has only two states: on or off, one or zero. An analog computer, by contrast, can occupy many possible states. The team's example is the difference between a light switch, which is either on or off, and a dimmer, which can produce a variable amount of light.
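The light-switch analogy can be made concrete with a minimal sketch. The helper names and the choice of 16 evenly spaced levels here are assumptions for illustration, not a model of the team's devices:

```python
# Toy illustration of digital vs. analog-style states.

def digital_states():
    """A binary switch: only off (0) or on (1)."""
    return [0, 1]

def analog_states(levels=16):
    """A dimmer-like cell: evenly spaced brightness levels from 0.0 to 1.0."""
    return [i / (levels - 1) for i in range(levels)]

print(len(digital_states()))   # 2 possible states
print(len(analog_states()))    # 16 possible states
```

A single analog cell with 16 levels carries as much information as four binary switches, which is the kind of density gain the article describes.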
According to team leader Saptarshi Das, assistant professor of engineering science and mechanics at Penn State, brain-inspired computing has been studied for more than 40 years. Today, the limits of digital computing are pushing researchers toward alternatives for demanding tasks such as the high-speed image processing required by autonomous vehicles.
Big data is also playing a substantial role in the move toward neuromorphic computing, since it relies on the kinds of pattern recognition at which brain-inspired computing excels.
“We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.
Shuttling data back and forth between memory and logic expends a great deal of energy and slows computation. Keeping computation and memory storage in separate places also requires a great deal of physical space.
Thomas Shranghamer is a doctoral student in the group and first author of the paper.
“We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” Shranghamer said. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”
Reconfigurable Artificial Neural Networks
The team is working on artificial neural networks that can be reconfigured much like the neurons in the human brain. This is done by applying a brief electric field to a sheet of graphene, a layer of carbon only one atom thick. The team demonstrated at least 16 possible memory states in these devices.
“What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors,” Das said.
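One way to see why multiple memory states matter for neural networks is that each synaptic weight could be stored in a single 16-state device instead of many binary ones. The sketch below is a hedged illustration, not the team's device model: the weight range of [-1, 1], the quantization scheme, and the toy threshold neuron are all assumptions.

```python
# Sketch: storing a synaptic weight in a cell with 16 discrete states.

def quantize(w, levels=16, w_min=-1.0, w_max=1.0):
    """Snap a continuous weight to the nearest of `levels` storable states."""
    w = min(max(w, w_min), w_max)            # clamp into the storable range
    step = (w_max - w_min) / (levels - 1)    # spacing between adjacent states
    return w_min + round((w - w_min) / step) * step

def neuron(inputs, weights):
    """Weighted sum of quantized weights, then a simple threshold activation."""
    s = sum(x * quantize(w) for x, w in zip(inputs, weights))
    return 1 if s > 0 else 0

print(quantize(0.5))                       # nearest of the 16 levels
print(neuron([1.0, 0.5], [0.9, -0.2]))     # fires: 1
```

With 16 states per transistor, one device holds 4 bits of weight information, which is where the energy and area savings over binary storage come from.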
The team now hopes to commercialize the technology, and Das believes there will be considerable interest in the work given the shift toward neuromorphic computing under way at the largest semiconductor companies.
The work coming from the team at Penn State is the latest example of the transition to these types of artificial neural networks. The human brain proves its value once again as an inspiration for many of the newest technologies, and it provides valuable insight into how experts can drastically reduce the size of modern supercomputers.