Neuromorphic Computers: Being inspired by biology can help overcome the limitations of modern computer architectures.
We already mentioned neuromorphic computers in the article about specialized hardware for AI. The concept is not exactly new: the term was coined in the ’80s by Carver Mead, and then “made official” in a paper that later became famous: Neuromorphic Electronic Systems.
Put simply, a neuromorphic computer is a computer whose architecture is designed to simulate the functioning of the brain. Beyond its value for scientific simulation, the main reason to take this path is to overcome the intrinsic limits of modern architectures, which we are quickly approaching.
Limitations of modern architectures
Today almost all computers broadly follow the von Neumann architecture: a CPU, one or more memory devices (RAM, hard disk, etc.), and a bus with various channels.
In this architecture, data is constantly transferred back and forth over the bus between the CPU and memory. This data flow is paced by a system clock, which nowadays runs on the order of GHz, i.e. billions of cycles per second.
This architecture, although very successful in today’s computers, has a weak point: the bottleneck between the CPU and the storage devices, which operate at drastically different speeds. The problem is mitigated, but not solved, by caching mechanisms and by maximizing the transfer speed of the bus. Another problem is that progressive miniaturization, which has followed Moore’s law until now, is approaching its physical limits. These limits are why we will need to find different ways to keep increasing computing power, which is becoming more and more necessary to train today’s neural networks.
Neuromorphic computers to the rescue
These problems are becoming increasingly pressing in the era of Big Data and Deep Learning, with its ever wider and more complex neural networks. So what can we do? One possible way is quantum computers (we have already talked about them in the Hitchhiker’s guide to quantum computers), which are promising but still at a very early stage and not yet mature enough for general-purpose solutions. In the end, the solution could simply be to take inspiration from biology and build artificial systems that work like the human brain: neuromorphic computers.
Neuromorphic processing is based on some key points:
- Memory and computation in the same place: no longer two separate systems as in the von Neumann architecture, but many simple “processors” (inspired by neurons).
- Parallelism: neural networks built on this principle are designed to be intrinsically capable of massive parallelism.
- Extensive connectivity: as in the human brain, nodes are densely connected locally (within the same structure), but also through “long-range” connections to nodes of other structures.
- Spike processing: the various nodes communicate through spikes, inspired by biological action potentials.
Spiking Neural Networks
A crucial feature of neuromorphic processing is the use of “spiking” neural networks, which are operationally closer to their biological counterparts. In “traditional” neural networks, such as perceptrons or convolutional networks, all the neurons of a given layer “fire” a real value together at each propagation cycle, the value of each neuron depending on the inputs it receives and on its activation function.
In spiking networks, instead of firing (or not) at every propagation cycle, a neuron fires only when its activation exceeds a certain threshold. In other words, it follows the same “all or nothing” law as biological action potentials.
Because of this law, these signals can safely be treated as digital, and information is carried by the frequency and timing with which they are fired. Moreover, firing also depends on the nature of the synapses (the connections between neurons), which can be excitatory or inhibitory. The advantage of this type of network lies in its computational simplicity: the neurons only perform simple algebraic sums.
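To make the “all or nothing” rule concrete, here is a minimal sketch in Python (purely illustrative: the weights and threshold are made up and not tied to any real neuromorphic platform) of a single spiking neuron that accumulates weighted incoming spikes and fires only when a threshold is crossed:

```python
# Minimal sketch of a single spiking neuron: it accumulates weighted
# incoming spikes (simple algebraic sums) and emits an "all or nothing"
# spike whenever its potential crosses a threshold.

class SpikingNeuron:
    def __init__(self, weights, threshold=1.0):
        self.weights = weights        # positive = excitatory, negative = inhibitory
        self.threshold = threshold
        self.potential = 0.0          # accumulated membrane potential

    def step(self, incoming_spikes):
        """incoming_spikes[i] is 1 if presynaptic neuron i spiked, else 0."""
        self.potential += sum(w * s for w, s in zip(self.weights, incoming_spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after firing
            return 1                  # spike ("all")
        return 0                      # no spike ("nothing")


# Toy usage with two excitatory synapses and one inhibitory synapse.
neuron = SpikingNeuron(weights=[0.6, 0.5, -0.4], threshold=1.0)
for spikes in [(1, 0, 0), (0, 1, 1), (1, 1, 0)]:
    print(neuron.step(spikes))        # prints 0, 0, 1
```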
That said, even if the computation in these networks is theoretically simple, it can become complex to implement on traditional architectures. To correctly represent the evolution of the signals over time (their frequency), it would be necessary to solve differential equations, whose cost would undermine the initial advantage.
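To see why simulating the time course of the signals is heavier, here is a sketch of a leaky integrate-and-fire neuron (a standard textbook model, not tied to any specific chip; all constants are arbitrary), whose membrane potential follows a differential equation that must be numerically integrated at every time step:

```python
# Illustrative leaky integrate-and-fire (LIF) neuron: the membrane
# potential follows dV/dt = (-(V - V_rest) + R * I) / tau, which on a
# conventional CPU has to be integrated numerically at every time step.

V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0   # arbitrary units
TAU, R, DT = 10.0, 1.0, 0.1                 # time constant, resistance, step (ms)

def simulate_lif(input_current, n_steps=200):
    v = V_REST
    spike_times = []
    for step in range(n_steps):
        # Euler integration of the membrane equation.
        dv = (-(v - V_REST) + R * input_current) / TAU
        v += dv * DT
        if v >= V_THRESH:                   # "all or nothing" firing
            spike_times.append(step * DT)
            v = V_RESET                     # reset after the spike
    return spike_times

# A stronger constant input current makes the neuron fire more often,
# i.e. the information is carried by the spike frequency.
print(simulate_lif(1.5))
print(simulate_lif(3.0))
```

Note how each simulated neuron now costs one integration step per tick on a conventional CPU, which is exactly the overhead the paragraph above refers to.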
However, with appropriate architectures based on memristors (a kind of resistor with memory), we can implement circuits that effectively simulate biological synapses. These architectures can be built with relatively inexpensive components, and consume a fraction of the energy needed by their traditional counterparts.
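As a purely conceptual toy (not a model of any real device), the “memory” of a memristive synapse can be pictured as a conductance that drifts with the activity that has passed through it; the Hebbian-like update rule below is an illustrative assumption:

```python
# Toy model of a memristive synapse: its conductance (the synaptic
# "weight") is not fixed, but drifts with the activity that has passed
# through it, which gives the device its memory. Constants are arbitrary.

class MemristiveSynapse:
    def __init__(self, g_min=0.1, g_max=1.0, learning_rate=0.05):
        self.g = g_min                      # current conductance (weight)
        self.g_min, self.g_max = g_min, g_max
        self.lr = learning_rate

    def transmit(self, pre_spike, post_spike):
        """Pass a spike through and update the conductance based on activity."""
        current = self.g * pre_spike
        # Correlated pre/post activity strengthens the synapse,
        # uncorrelated activity slowly weakens it (Hebbian-like toy rule).
        if pre_spike and post_spike:
            self.g = min(self.g_max, self.g + self.lr)
        elif pre_spike:
            self.g = max(self.g_min, self.g - self.lr / 2)
        return current

synapse = MemristiveSynapse()
for pre, post in [(1, 1), (1, 1), (1, 0), (1, 1)]:
    print(round(synapse.transmit(pre, post), 3), round(synapse.g, 3))
```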
Notable implementations
The development of neuromorphic computers is slowly catching on, even though the technology is not yet mature: just two years ago, investments had already exceeded $400 million in North America and the European Union (see below). Below are some of the best-known implementations.
Human Brain Project
The Human Brain Project is a huge research project that aims to accelerate research in the field of neuroscience. One of its areas is the “Silicon Brain”, where SpiNNaker (see below) and BrainScaleS (an architecture designed to simulate the plasticity of neural connections) have been brought together.
SpiNNaker
Based on ARM processors, each SpiNNaker chip features processor cores, an SDRAM memory module, and a router capable of conveying spike messages to the other chips. From a software point of view, SpiNNaker’s programming paradigm is a simple event-driven model, and the project provides dedicated tools. Applications do not control the execution flow; they can only indicate the functions to be performed when a specific event occurs, such as the arrival of a packet or an elapsed timer. The SpiNNaker Application Runtime Kernel (SARK) monitors the execution flow and schedules/routes the calls to those functions. The state of the art is the SpiNNaker Machine, with more than 500k processors.
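The sketch below illustrates this event-driven style in generic Python; it is not SpiNNaker’s actual API, just a toy kernel that owns the execution flow while the application only registers callbacks:

```python
# Generic sketch of the event-driven model described above: the
# application never drives the main loop, it only registers functions
# to be called when events (packet arrival, timer tick) occur.

import heapq

class EventDrivenKernel:
    """Toy stand-in for a runtime kernel that owns the execution flow."""

    def __init__(self):
        self.callbacks = {}          # event name -> registered function
        self.queue = []              # (time, event, payload)

    def callback_on(self, event, func):
        self.callbacks[event] = func

    def post(self, time, event, payload=None):
        heapq.heappush(self.queue, (time, event, payload))

    def run(self):
        # The kernel, not the application, schedules and routes the calls.
        while self.queue:
            time, event, payload = heapq.heappop(self.queue)
            if event in self.callbacks:
                self.callbacks[event](time, payload)

kernel = EventDrivenKernel()
kernel.callback_on("packet_received", lambda t, p: print(f"{t} ms: spike from {p}"))
kernel.callback_on("timer_tick", lambda t, p: print(f"{t} ms: periodic update"))
kernel.post(1.0, "timer_tick")
kernel.post(1.5, "packet_received", payload="neuron 42")
kernel.post(2.0, "timer_tick")
kernel.run()
```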
TrueNorth
This architecture was developed in 2014 by IBM as part of the SyNAPSE program. TrueNorth, like other architectures of this kind, works with spiking neural networks. It has proven particularly well suited to computer vision, and just last year IBM announced a collaboration with the Air Force Research Lab to build a 64-chip array. The idea is to bring added value to applications such as driverless cars, satellites, and drones.
Mobile
Computer vision technologies could not be missing from smartphones, and several top-range phones already ship with an NPU (Neural Processing Unit). Uses are still limited, but the scenario could change quickly, since the technology is already there.
Starting with Android 8.1, the NN-API is available, through which developers can access the NPU without knowing its architectural details. Google has also released TensorFlow Lite, which fully supports it.
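As a rough idea of what this looks like for a developer, the sketch below runs an already-converted model with the TensorFlow Lite Python interpreter (assuming a recent TensorFlow release; the model file name and the dummy input are hypothetical placeholders). On a phone, the equivalent Android API can delegate the work to the NPU through the NN-API:

```python
# Rough sketch: running an already-converted .tflite model with the
# TensorFlow Lite Python interpreter. The model file name and the
# random input below are hypothetical placeholders.

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v1.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
input_data = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)

interpreter.invoke()

# The output tensor holds, e.g., the class scores of an image classifier.
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```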
The Google Pixel 2 XL mounts the Pixel Visual Core on board, unlocked only with Android 8.1, although at the moment it is used only for HDR+ photography. Qualcomm implemented AI features in its Snapdragon 835 and 845, and will work together with Baidu to improve speech recognition.
Of course, Apple with its A11 and Huawei with its Kirin chips could not miss the party either.
As already mentioned, the uses of these NPUs are still rather limited, but this is just the beginning, and the sector is booming.
Notes
A clock rate of 1 GHz is equivalent to a frequency of about one billion cycles per second, i.e. one cycle per nanosecond.
This even though research keeps pushing this physical limit further, for example through new graphene inductors or the exploitation of quantum effects in transistors.
The reason this kind of architecture is not yet widespread, despite the idea being decades old, is the same reason that progress in artificial intelligence stalled for more than 20 years after its initial promise: the technology was simply not ready yet. Today, with the renewed interest in artificial intelligence and neuroscience, combined with technological maturation, neuromorphic computers are coming back into vogue.
Links
Spiking Neural Networks, the Next Generation of Machine Learning
Neuromorphic Chips Are Destined for Deep Learning—or Obscurity
Qualcomm-backed startup announces AI processor family
Researchers create organic nanowire synaptic transistors that emulate the working principles of biological synapses
Introduction to Neuromorphic Computing Insights and Challenges (pdf)
What do made-for-AI processors really do?
Neuromorphic Computing Could Build Human-Like Machine Brains
Machine learning and AI: How smartphones get even smarter
Neuromorphic Chips: a Path Towards Human-level AI
Neuromorphic Computing Chip – The Next Evolution in Artificial Intelligence
Artificial synapse for neuromorphic chips
A Memristor based Unsupervised Neuromorphic System Towards Fast and Energy-Efficient GAN (pdf)
Classifying neuromorphic data using a deep learning framework for image classification (pdf)
Large-Scale Neuromorphic Spiking Array Processors: A quest to mimic the brain (pdf)
Convolutional networks for fast, energy-efficient neuromorphic computing (pdf)
Projects
The Human Brain Project
The Blue Brain Project – A Swiss Brain Initiative
NEST Initiative – The Neural Simulation Technology Initiative
Gromacs
STEPS – STochastic Engine for Pathway Simulation
Project NEURON – Novel Education for Understanding Research on Neuroscience
Neuromem Smart – Hardware neurons inspired by biology