Abstract:
This paper describes a numerical approach to simulating adaptive, biologically plausible spiking neural networks, with the primary application being simulation of the early stages of mammalian vision. These are time-dependent neural networks with a realistic Hodgkin-Huxley (HH) [1] model for the neurons; the HH model uses four nonlinear, coupled ordinary differential equations per neuron. The learning used here is also biologically plausible: a Hebbian approach based on spike-timing-dependent plasticity (STDP) [2]. To make the approach general and flexible, neurogenesis and synaptogenesis have been implemented, which allows the code to automatically add or remove neurons (or synapses) as required. Traditional rate-based and spiking neural networks have been shown to be very effective for some tasks, but they have problems with long-term learning and "catastrophic forgetting": once a network is trained to perform some task, it is typically difficult to adapt it to new applications. While adaptive rate-based NNs have been developed, our approach is one of the few that uses a biologically plausible system. To do this properly, one can mimic processes that occur in the human brain; to be effective, however, this must be accomplished while maintaining the current memories. In this paper we describe a new computational approach that can continually learn and grow: neurons and synapses can be automatically added to and removed from the simulation while it runs. The learning algorithm [3] uses a combination of synapse-weight homeostasis, spike timing, and stochastic forgetting to achieve stable and efficient learning. The approach is not only adaptable but also scalable to very large systems (billions of neurons), and it can remove synapses of very low strength, thus saving memory. Several issues arise when implementing neurogenesis and synaptogenesis in a spiking code. When several synapses die, a neuron may be left with no synapses, requiring its removal; conversely, a neuron's death requires updating the synaptic information of all the neurons it was connected to. More importantly, rules need to be developed that help determine when the network should be modified. These issues, and efficient ways to address them, are discussed. In addition, the computational performance and memory requirements of several neuron models (integrate-and-fire, Izhikevich, and Hodgkin-Huxley) are discussed [4]. The software is written in C++; it is efficient and scalable, and it requires minimal memory per neuron and synapse. A 2.4 GHz MacBook laptop ran 100 million synapses (1 million neurons) for 0.1 simulated seconds (100,000 timesteps) in 5.2 hours (a mouse has roughly 10 million neurons and 81 billion synapses [5]). This case required only 200 MBytes of memory.
Keywords:
Hebbian learning; spike-timing-dependent plasticity; Hodgkin-Huxley neuron model; adaptive spiking neural networks; biologically plausible neural networks; catastrophic forgetting; coupled ordinary differential equations; homeostasis; neurogenesis; synaptogenesis; mammalian vision; learning algorithms; memory management; C++