The CapoCaccia Workshops toward Neuromorphic Intelligence

Neuromorphic Engineering

Neuromorphic engineering is concerned with the design and fabrication of artificial neural systems whose architecture and design principles are based on those of biological nervous systems. Neuromorphic systems of neurons and synapses can be implemented in CMOS (Complementary Metal-Oxide-Semiconductor) electronics, using hybrid analog/digital VLSI (Very Large Scale Integration) technology.

The notion of neuromorphic engineering has appeared in various forms over the last few centuries: from the French and Swiss mechanical automata of the 18th and 19th centuries (e.g. Vaucanson's duck), through the biological robots of the fifties (Grey Walter's tortoises), to the early neural electrical circuits of the sixties and seventies (e.g. Fukushima in Japan, Mueller at the University of Pennsylvania, and Lettvin at MIT). The concept took root again at Caltech during the mid-eighties, this time in the research of Carver Mead, who had already made major conceptual contributions to the design and construction of digital VLSI systems. He recognized that the use of transistors for computation had changed very little from the time when John von Neumann first proposed the architecture of the programmable serial computer. The bit-perfect, strictly synchronous, and largely serial processing strategy required by present computers is untenable in the long run, because it does not scale: it implies exponentially greater power consumption for larger systems, and exponentially higher costs of constructing, testing, and maintaining elements that must operate fault-free.

The design of biological neural computation is very different from that of modern computers. Neuronal networks process information using energy-efficient, asynchronous, event-based methods. Biology uses self-construction, self-repair, and self-programming, and has learned how to flexibly compose complex behaviors from simpler elements. Of course, these biological abilities are not yet understood; but they offer an attractive alternative to conventional technology, and have enormous consequences for future artificial information-processing and behaving systems.

The challenge for neuromorphic engineering is to explore the methods of biological information processing in a practical electrical-engineering context. Can existing CMOS VLSI technology be deployed in a novel way that reflects more closely the neuronal approach to computation? And can one thereby reap some of the computational facility, speed, and efficiency that biological nervous systems bring to the solution of real-world problems? Conversely, can one, by engineering artifacts, gain new insights into how biology organizes computation?

Digital and Analog in Neuromorphic VLSI Systems

The majority of integrated circuits represent numbers as binary digits. Binary digits are used because it is possible to standardize the behavior of transistors so that their state can be determined reliably to a single bit of accuracy. The reliable bits can then be combined to encode variables to arbitrarily high precision. It is this precision, together with the synchronous, serial model of processing, that has been the foundation of the long boom in digital electronics. One unnecessary consequence of this exuberance is that modern computers use deterministic high precision even for real-world tasks whose variables can often be encoded with only a few bits.

For many such problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those we have been able to implement using digital methods. This advantage is due principally to biology's use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals rather than by the absolute values of digital signals. Typically, it is this style of processing that neuromorphic engineers explore. Their systems are large collections of communicating computational primitives, implemented either in analog or, more commonly, in hybrid analog/digital circuits.

Analog computing has a number of interesting features. Firstly, analog processors are inherently more dense than digital ones, because the individual electrical nodes of analog circuits can represent multiple bits of information. Secondly, unlike conventional digital circuits, whose co-ordination is governed by a global clock, analog circuits are naturally synchronized in physical time rather than against clock time. This property is particularly useful for systems that model and interact with real-world processes: if the time constants of the analog circuits are appropriately set, their processing can be scaled to match the real-world processes with which they interact.

Unfortunately, an analog processor is much more difficult to implement than its digital counterpart. Usually an analog processor must be purpose-built, and does not have the general character of digital systems. Analog circuits are difficult to implement because the physics of the material used to construct them plays an important role in the solution of the problem. It is also difficult to control the physical properties of micron-sized devices such that their analog characteristics are well matched; this matching of analog device characteristics is a major difficulty facing the analog designer. Digital machines also have an advantage over analog ones when high precision is required. The analog approach usually requires adaptive techniques to mitigate the effects of component differences. However, this kind of adaptation leads naturally to systems that learn about their environment.

Digital systems depend on a small set of primitives that can support universal logical computation: for example, any logic function can be implemented using only NAND gates. In principle, a similar mathematical universality is possible in analog circuits by using primitive circuits for addition and multiplication. However, the neuromorphic engineer is usually more concerned with identifying the primitives needed to emulate neuronal processing than with mathematics per se.

For many crucial neuron-like operations, dedicated analog processors can be constructed much more compactly than their digital counterparts. Some neuromorphic primitives are obvious, such as the simple addition of currents by joining their conductors. Others provide useful generic functions, such as logistic and tanh circuits. Yet others are simple but sophisticated processors. These circuits are usually based on CMOS FETs, whose conductivity can be altered by an applied electric field. A Field Effect Transistor (FET) is a device in which the flow of current between its source and drain terminals is controlled by a voltage applied to its gate terminal. The relationship between the gate voltage and the current that flows across the transistor's channel is rather subtle. There are two primary regimes of behavior. In the "subthreshold" regime, where small voltages are applied between the gate and the source, the transistor current grows exponentially with the gate voltage\nocite{Liu_etal02}. In the "above-threshold" regime, where larger voltages are applied, the current grows more slowly. Conventionally, transistors are operated in the above-threshold regime; in the extreme case of digital circuits, only very large gate voltages are used, so that the transistors are either fully off or fully conducting.
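
As a concrete illustration, the standard textbook form of this exponential law (a generic expression, not tied to any particular circuit discussed here) gives the subthreshold drain current of an nFET in saturation as

$$ I_{ds} = I_0 \, e^{\kappa V_{gs}/U_T}, \qquad U_T = kT/q \approx 25\,\mathrm{mV} \ \text{at room temperature}, $$

where $I_0$ is a small fabrication-dependent scaling current and $\kappa$ (typically 0.6-0.9) measures how effectively the gate couples to the channel. A 100 mV increase in gate voltage thus multiplies the current by roughly $e^{0.7 \cdot 100/25} \approx 16$.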

By contrast, neuromorphic circuits typically exploit the otherwise unpopular subthreshold regime, because this regime offers some striking advantages for non-digital circuits\nocite{Mead90}. The exponential increase in transistor current with gate voltage means that the gain (loosely defined as the change in current for a given change in voltage) of the subthreshold transistor is very high, and gain is the lever of information processing. The exponential property can be used to compose interesting analog circuits with natural exponential and logarithmic behaviors that are useful for emulating neuronal properties. A further feature of subthreshold circuits is that the absolute currents flowing through the transistors are very small, and so these circuits operate at very low power. The downside is that the signals are very small, and so the circuits are susceptible to noise and fabrication variations.

All these interesting properties offer the possibility of constructing very low-power, brain-like electronic circuits that operate in real time. An important strategic consideration in this quest is that the basic CMOS VLSI technology does not have to be developed by the neuromorphic engineering community itself. The development of VLSI technology is fueled by the digital information technology (IT) market, and those developments need only be adapted to the needs of subthreshold analog neuromorphic engineering. The very same technology used to construct digital computers can also be used (in subthreshold) to construct computational primitives for systems of neuron-like processors. Unlike digital systems, which have a relatively small number of general-purpose processors that process a command stream sequentially, the performance of neuromorphic systems depends on the parallel configuration of large numbers of these special-purpose primitives.

Emulation versus Simulation

What advantages do these neuromorphic circuits have? One advantage is that they offer a medium in which neuronal networks can be emulated directly in hardware, rather than simply simulated on a general-purpose computer. However, it should be noted that the uses of emulation are not the same as those of simulation. Digital simulation is generally used to explore the quantitative behavior of neuronal systems\nocite{Koch_Segev89}, because such systems are composed of large numbers of non-linear elements with a wide range of time constants, and their mathematical behavior can rarely be solved analytically. But the speed of simulation is limited by the shortest time constant of the problem, and so simulation performance slows dramatically as the number and degree of coupling of elements increases.

By contrast, neuromorphic emulations operate in real time, and the speed of the network is independent of the number of neurons or their coupling. However, analog circuits provide only a good qualitative approximation to the exact behavior of simulated neurons. Moreover, the design of special-purpose hardware is a significant investment, particularly if it is analog hardware, since analog VLSI (aVLSI) design remains very much an art form. Designing robust novel circuits depends on engineers with considerable experience, preferably working in small groups with sufficient collective knowledge to monitor and criticize one another's design assumptions and circuit layouts. So, in their present form, neuromorphic circuits are not suitable for quantitative simulation.

On the other hand, where electronic neural circuits do provide a tangible advantage is in the investigation of questions concerning the strict real-time interaction of a system with its environment, possibly taking sensory input from neuromorphic sensors such as silicon retinas\nocite{Lichtsteiner_etal06} or silicon cochleas\nocite{Chan_etal06}. Neuromorphic retina and cochlea sensors are examples of extremely successful systems built with these design principles. Neuromorphic retinas are vision sensors that adapt in real time to local changes in illumination, over a wide range of intensities, and that reproduce many of the functionalities of the retina's outer-plexiform layer. Thanks to their adaptation and real-time response properties, they are currently being applied to machine-vision tasks that are difficult to solve using conventional approaches. These include, for example, traffic monitoring in open, uncontrolled environments such as freeways, and fast robotics, where the low latency and sparsity of AER sensor data are very advantageous. Similarly, silicon cochleas convert auditory signals into sequences of spikes generated by populations of silicon neurons arranged tonotopically along the frequency axis, much like the inner hair cells located along the basilar membrane of real cochleas. These devices too have promising application domains, both as alternative low-power solutions for cochlear implants, and as computational devices for engineering applications.

Neurons in Silicon

The last half-century of intensive investigation of neuronal processing has revealed the rich membrane biophysics of synapses, dendrites, somata, and axons. But we still do not have a clear understanding of the formal processing characteristics of neurons and their networks. That is, we do not yet have a more sophisticated abstract model of the neuron: one that replaces McCulloch-Pitts simplicity with both richer biophysical verisimilitude and a clear computational specification. Such a model would provide the necessary foundation for understanding the global nature of spike-based computations supported by neuronal circuits. In addition to its relevance for neuroscience, this model would also be relevant for future technology. As its contribution to this quest, the relatively small neuromorphic engineering community has, during the last decade, steadily developed a robust infrastructure for studying event-driven computation in networks of neuron-like elements. It has developed circuits for neurons, dendrites, and synapses, as well as general methods for event-driven communication between neurons distributed over possibly many chips. It is now possible to assemble quite complex systems of such neurons.

Integrate-and-fire models

A cornerstone of these systems is the integrate-and-fire (I&F) neuron. I&F neurons integrate presynaptic input currents and generate a voltage pulse, analogous to an action potential, when the integrated voltage reaches a threshold. Many variants of these circuits were built during the fifties and sixties using discrete electronic components. The first simple VLSI version was probably the Axon Hillock circuit, built by Mead in the late eighties. In this circuit, a capacitor that represents the somatic membrane capacitance integrates the current input to the neuron. When the capacitor voltage crosses a threshold, it is reset by a positive feedback loop. More recent models include additional neural characteristics, such as spike-frequency adaptation and refractory-period mechanisms.
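
In software terms, the behavior that these circuits implement can be captured in a few lines of simulation. The sketch below is a generic leaky I&F model; all parameter values are illustrative assumptions, not measurements from any particular chip:

    # Minimal leaky integrate-and-fire neuron (illustrative parameters).
    def simulate_if(i_in, dt=1e-4, c_mem=1e-9, g_leak=5e-9,
                    v_thresh=1.0, v_reset=0.0, t_refr=2e-3):
        """Integrate input current onto a membrane capacitor; spike and
        reset on threshold crossing, then hold during the refractory period."""
        v, refr, spike_times = v_reset, 0.0, []
        for k, i in enumerate(i_in):
            if refr > 0.0:                 # refractory period: ignore input
                refr -= dt
                continue
            v += dt * (i - g_leak * v) / c_mem   # C dV/dt = I_in - g_leak*V
            if v >= v_thresh:              # threshold crossing -> emit spike
                spike_times.append(k * dt)
                v, refr = v_reset, t_refr
        return spike_times

    # A constant 10 nA input drives regular firing.
    print(simulate_if([10e-9] * 5000))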

Conductance-based models

These VLSI I&F neurons provide convenient approximations to the behavior of neuronal somata without committing to the overhead of emulating the plethora of voltage-dependent conductances and currents present in real neurons. But, if necessary, these conductances can be emulated using subthreshold CMOS circuits. For example, the silicon neurons of Douglas and Mahowald are composed of connected compartments, each of which is populated by modular sub-circuits that emulate particular ionic conductances. The dynamics of these circuits are qualitatively similar to those of the Hodgkin-Huxley mechanism, without implementing its specific equations.

In their Hodgkin-Huxley neuron circuit, the membrane capacitance is connected to a “transconductance amplifier” that implements a conductance term, whose magnitude is modulated by the bias voltage Gleak. This “passive” leak conductance couples the membrane potential to the potential of the ions to which the membrane is permeable (Eleak). Similar strategies are used to implement the “active” sodium and potassium conductance circuits. Transconductance amplifiers implement simple first-order low-pass filters to provide the kinetics. A current mirror is used to subtract the sodium activation and inactivation variables (INaon and INaoff), rather than multiplying them as in the Hodgkin-Huxley formalism. Additional current mirrors half-wave rectify the sodium and potassium conductance signals, so that they are never negative. Beyond the sodium and potassium circuits, several other conductance modules have been implemented using these principles, for example: persistent sodium current, various calcium currents, calcium-dependent potassium current, potassium A-current, nonspecific leak current, and an exogenous (electrode) current source. The prototypical circuits can be modified in various ways to emulate the particular properties of a desired ionic conductance. For example, some conductances are sensitive to calcium concentration rather than membrane voltage, and require a separate voltage variable representing the free calcium concentration. Synaptic conductances are sensitive to ligand concentrations, and these circuits require a voltage variable representing neurotransmitter concentration. This array of ionic conductances, with their different dependencies and time constants, gives rise to state-dependent dynamics within the compartments. These circuits can be composed to model in detail the electrophysiological behavior of several types of neurons, for example pyramidal cells.
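
In behavioral terms, each such module is a first-order filter feeding a subtraction and a rectification stage. The sketch below is a loose software caricature of the sodium module described above; the time constants, gain, and activation function are assumptions made for illustration, not a transcription of the actual circuit:

    # Caricature of a sodium conductance module: two first-order low-pass
    # filters stand in for the transconductance-amplifier kinetics, a
    # subtraction for the current mirror, and max(0, .) for the half-wave
    # rectification, so the output is a transient, never-negative current.
    def na_module(v_mem_trace, dt=1e-5, tau_on=1e-4, tau_off=1e-3, gain=1e-8):
        i_on = i_off = 0.0
        i_na = []
        for v in v_mem_trace:
            drive = gain * max(0.0, v)                # activation drive (assumed)
            i_on += dt * (drive - i_on) / tau_on      # fast "activation" filter
            i_off += dt * (drive - i_off) / tau_off   # slow "inactivation" filter
            i_na.append(max(0.0, i_on - i_off))       # mirror subtract + rectify
        return i_na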

Axons, action potentials, and the Address-Event representation

Biological neurons communicate with one another using dedicated point-to-point axons. The all-or-nothing action potential can be translated into a discrete-level signal that is robust against noise and inter-chip variability, can be conveniently transmitted between chips\nocite{Mahowald94}, and is easily interfaced to standard logic and computer systems. In the address-event representation (AER) method developed by Mahowald and others, the action potentials generated by a particular neuron are transformed into an address that identifies the source neuron, and broadcast on a common data bus. Many silicon neurons can share the same bus, because switching times in CMOS and on the bus are much faster than the switching times of neurons. Events generated by silicon neurons can be broadcast and removed from a data bus at frequencies greater than a megahertz; therefore, more than 1000 address-events can be transmitted in the time it takes one neuron to complete a single action potential. The addresses are detected by the target synapses, which then initiate their local synaptic action.
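
In software terms, an address-event stream is nothing more than a sequence of (timestamp, source-address) pairs sharing one channel. The toy sketch below uses made-up addresses purely for illustration:

    from collections import deque

    # Toy address-event bus: each spike is reduced to the address of its
    # source neuron plus a timestamp, and many neurons time-multiplex the
    # same channel (in hardware, an arbiter serializes concurrent events).
    class AERBus:
        def __init__(self):
            self.events = deque()

        def emit(self, t_us, neuron_addr):
            self.events.append((t_us, neuron_addr))

    bus = AERBus()
    bus.emit(10, 0x2A)   # neuron 42 spikes at t = 10 microseconds
    bus.emit(11, 0x07)   # neuron 7 spikes one microsecond later
    print(list(bus.events))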

Event-based digital encoding methods are convenient for configuring large networks, because the network connectivity can be implemented by a programmable digital address mapper. The mapper receives an address-event from a source neuron, and translates it into one or more target addresses that are transmitted on the pre-synaptic bus. Because the mapping table is programmable, arbitrary network topologies can be set, and modified dynamically. This method greatly facilitates the configuration and testing of large multi-chip systems.
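
A minimal sketch of the mapper idea, with a made-up connectivity table (the addresses continue the toy example above and are not from any real system):

    # Programmable connectivity table: source address -> target synapse
    # addresses. Reprogramming the table rewires the network dynamically,
    # without any change to the silicon.
    mapping = {
        0x2A: [0x100, 0x101],   # neuron 42 projects to two target synapses
        0x07: [0x100],          # neuron 7 projects to one
    }

    def route(event):
        t_us, src = event
        return [(t_us, dst) for dst in mapping.get(src, [])]

    print(route((10, 0x2A)))    # -> [(10, 256), (10, 257)]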

Synapses in Silicon

Silicon synapses come in many forms, depending on the underlying model and the circuit implementation\nocite{Bartolozzi_Indiveri07b}. The models range from simple current-source models to more complicated conductance-based models, including approximations of AMPA and NMDA excitatory synapses, and of potassium-mediated and chloride-mediated (shunting) inhibitory synapses. Synaptic circuits have been demonstrated that reproduce the exponential dynamics observed in real synapses.
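
Behaviorally, most of these models reduce to a post-synaptic current that steps up on each input spike and decays exponentially in between. A minimal sketch, with assumed parameter values:

    import math

    # First-order exponential synapse: the post-synaptic current jumps by a
    # weight w on each input spike and decays with time constant tau_syn.
    def syn_current(spike_times, t_end, dt=1e-4, tau_syn=5e-3, w=1e-9):
        pending = sorted(spike_times)
        i_syn, trace = 0.0, []
        for k in range(int(t_end / dt)):
            t = k * dt
            i_syn *= math.exp(-dt / tau_syn)    # exponential decay
            while pending and pending[0] <= t:  # input event arrives
                i_syn += w
                pending.pop(0)
            trace.append(i_syn)
        return trace

    # Two input spikes 10 ms apart; the second rides on the tail of the first.
    trace = syn_current([0.005, 0.015], t_end=0.05)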

Plasticity and learning

One of the key properties of biological synapses is their short- and long-term plasticity. Short-term plasticity produces a dynamic modulation of synaptic strength determined by the timing of the input spikes alone, while long-term plasticity produces sustained changes in synaptic strength that are induced by correlations in the spiking activity of the pre- and post-synaptic processes.

Circuits have been developed that emulate short time-scale synaptic depression, as well as various learning circuits that implement long-term plasticity of synapses using spike-timing and/or rate information\nocite{Fusi_etal00,Indiveri_etal06}. Spike-timing dependent plasticity (STDP) mechanisms are particularly well suited to the VLSI neuromorphic networks described above, because these networks process AER signals. In STDP, the precise timing of the spikes generated by the pre- and post-synaptic neurons plays an important role in shaping the synaptic efficacy. If a pre-synaptic spike arrives at the synaptic terminal before a post-synaptic spike is emitted, within a window of causality, the synaptic efficacy is increased. Conversely, if the post-synaptic spike is emitted shortly before the pre-synaptic one arrives, the synaptic efficacy is decreased. Several modeling studies have developed learning algorithms based on STDP, and demonstrated how systems that use these algorithms can carry out complex information-processing tasks. The excitatory synaptic circuit of Indiveri implements both long- and short-term plasticity.
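
The basic pairwise STDP rule that such circuits implement can be summarized in a few lines; the amplitudes and time constants below are generic modeling choices, not values from a particular chip:

    import math

    # Pairwise STDP: potentiate when the pre-synaptic spike precedes the
    # post-synaptic one (causal pairing), depress in the opposite order.
    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20e-3, tau_minus=20e-3):
        dt = t_post - t_pre
        if dt > 0:     # pre before post -> increase efficacy
            return a_plus * math.exp(-dt / tau_plus)
        if dt < 0:     # post before pre -> decrease efficacy
            return -a_minus * math.exp(dt / tau_minus)
        return 0.0

    print(stdp_dw(0.010, 0.015))   # causal pairing: small potentiation
    print(stdp_dw(0.015, 0.010))   # anti-causal pairing: small depression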

Memory and synaptic weight storage

There are a number of practical problems that slow down the development of large-scale, distributed, massively parallel networks of VLSI I\&F neurons. Three of the most important are: 1) how to access the individual synapses of the network to provide input signals, and how to read from each neuron to generate output signals; 2) how to set and/or store the weights of individual synapses in the network; and 3) how to (re-)configure the network topology on the same chip. Of course, these problems arise directly from the fundamental difference in architecture between conventional computers and neural networks. Conventional computers have a single processor, or a small number of processors, connected to a random-access memory. This global access means that the state of the machine can be conveniently loaded and examined. By contrast, in neuronal networks the memory and processing are massively distributed, and co-localized at the synapses. In this case direct loading and inspection are not possible, except at the huge cost of providing duplicate access lines. Long-term plasticity circuits alleviate these problems slightly (for both biology and neuromorphic VLSI) by allowing the weights of synapses to be set automatically, without requiring dedicated access to individual synapses. But the difficulty of experimental observation and control remains for these circuits.

Basic considerations about the relation between precision and required silicon area suggest that biologically realistic low-resolution weights could be stored as analog voltages across capacitors. But conventional CMOS capacitors are subject to leakage, and so their voltages will decay over seconds if not refreshed. Alternatively, the analog values can be stored using non-volatile technology similar to that used in electrically erasable programmable read-only memory (EEPROM). In this case, the analog value is stored as charge on a ``floating gate'', that is, the gate of a FET sandwiched between two layers of (nearly perfect) oxide insulator. Charge is added to or removed from the floating gate by Fowler-Nordheim tunneling and impact-ionized hot-electron injection. The uncontrolled decay of charge from floating gates is negligible, so learned synaptic weights can be retained for decades, even when the power to the circuits is off. This exciting technology is still under development, but simple structures such as single-transistor synapses, and examples of networks that implement learning algorithms based on these circuits, have been demonstrated\nocite{Cauwenberghs_Bayoumi99}. The problem of synaptic weight storage would be simplified considerably if only one binary bit, rather than an analog value, needed to be stored. In fact, it has been demonstrated that networks of sufficiently large numbers of \emph{binary} synapses are adequate for any memory task\nocite{Fusi_etal00}. Circuits for such binary synapses have been proposed that bypass the need for specialized non-volatile analog memory structures within each synapse.
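
As a cartoon of the binary-synapse idea (the stochastic-transition rule below is an illustrative assumption in the spirit of the bistable-synapse models cited above, not a description of a specific circuit):

    import random

    # Cartoon bistable binary synapse: the expressed efficacy is a single
    # bit, and candidate transitions between the two states occur only
    # stochastically, so most pairings leave previously stored bits intact.
    class BinarySynapse:
        def __init__(self, weight=0):
            self.weight = weight        # expressed efficacy: 0 or 1

        def update(self, potentiate, p_flip=0.05):
            if random.random() < p_flip:
                self.weight = 1 if potentiate else 0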

Multi-Chip Neural Networks

Several examples of successful multi-chip networks of spiking neurons have been demonstrated in recent years. Using present CMOS technology, it is possible to implement of the order of hundreds of neurons and thousands of synapses per square millimeter of silicon. In principle, networks of this type can be scaled up to arbitrary size, but in practice the network size is limited by the maximum silicon area and AER bandwidth available. Given the current speed and specifications of the AER interfacing circuits\nocite{Boahen98}, and the availability of present silicon VLSI technology, the network size could be increased by at least two orders of magnitude. It is likely that large neuronal networks, such as those of the neocortex, are dominated by local connectivity, with only a relatively small fraction of long-range connections. In this case it may be possible to build similarly large-scale VLSI networks, in which multiple regional AER busses with the same address spaces carry local event traffic, and are inter-connected by sparser long-range traffic between local domains. However, before testing those connectivity limits, there is more immediate interesting work to be done with these multi-chip networks. For example, existing methods can be used to investigate complex spike-based learning algorithms in real time. And these studies are all the more interesting when considering problems of adaptation and learning in neuronal networks interfaced to neuromorphic AER sensors such as silicon retinas or silicon cochleas.

Impact of Neurobiology on Computer Engineering

Conventional engineering computing systems are beginning to face challenges that have many points in common with those faced by neuromorphic and biological neural systems: as VLSI technology scales to sub-micron feature sizes, power consumption per square micron becomes a serious limiting factor, and single transistors start to behave as unreliable stochastic devices. Under these conditions, insights obtained from building low-power neuromorphic circuits, and from biologically inspired methods for computing reliably with unreliable components, can bring crucial knowledge to advanced computing technologies. Similarly, the current trend towards multi-core digital processors in the IT industry will continue, and will require an understanding of massively parallel computation, which is also a central question of neuroscience. The principles of unclocked, real-time, asynchronous, event-driven computation are still far from clear, but it seems likely that this form of communication and processing lies at the heart of biology's ability to achieve effective collective behavior from massively distributed, weakly connected, and relatively slow neuronal processors. We expect that the exploration of neuromorphic systems will continue to contribute strongly to solving this intriguing puzzle.