

Abstract
A summary of multi-layer spiking neural network learning algorithms for image recognition

Li Yaxin1,2, Shen Jiangrong3, Xu Qi1,2 (1. School of Artificial Intelligence, Dalian University of Technology, Dalian 116024, China; 2. Guangdong Artificial Intelligence and Digital Economy Laboratory, Guangzhou 510335, China; 3. College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China)

Spiking neural networks (SNNs), known as the third generation of artificial neural networks, were systematically characterized by Wolfgang Maass, who summarized their structures, training methods, and other crucial components. The human brain contains hundreds of millions of neurons and synapses, yet its energy requirement is remarkably small. Compared with the first and second generations of artificial neural networks (ANNs), SNNs offer better biological interpretability and lower power consumption. Their neurons simulate the internal dynamics of biological neurons, and their weight updates simulate the construction, enhancement, and inhibition rules of biological synapses. Commonly used spiking neuron models include the Hodgkin-Huxley (HH) model, the leaky integrate-and-fire (LIF) model, and the spike response model (SRM). The difference in ion concentration between the inside and outside of a biological neuron determines the potential of the cell membrane. When a neuron is stimulated, ions move in and out of the membrane through ion channels, producing an action potential. A spiking neuron model is a mathematical model that simulates this action-potential process: a spiking neuron receives spike stimulation from the neurons in the preceding layer and, in turn, fires spikes to form an output spike train. SNNs transmit information through spike trains, which simulates the propagation of signals in biological neural networks and allows spike trains to convey spatiotemporal information. However, the performance of SNNs on pattern recognition tasks is still limited by the immaturity of their deep learning methods.
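The leaky integrate-and-fire dynamics described above can be sketched in a few lines of NumPy. This is a minimal illustrative simulation, not an implementation from any of the surveyed works; the time constant, threshold, and input values are assumed for demonstration.

```python
import numpy as np

def simulate_lif(input_current, tau=10.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    Membrane update: dV/dt = (-(V - v_rest) + I) / tau.
    When V crosses v_thresh, the neuron emits a spike (1) and V resets;
    otherwise it emits 0. The returned array is the output spike train.
    """
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt * (-(v - v_rest) + i_t) / tau   # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                          # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold input drives periodic firing.
train = simulate_lif(np.full(100, 1.5))
print(train.sum(), "spikes in 100 steps")
```

Because the steady-state membrane potential under constant input 1.5 exceeds the threshold, the neuron fires periodically, producing the binary spike train that SNNs use to carry information.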
The artificial neurons of conventional neural networks output real-valued activations, which makes it possible to train the parameters of deep networks with the global back-propagation algorithm. A spike train, by contrast, is a binary, discrete output, so training SNNs directly remains a challenging problem. To clarify the current situation, this survey first reviews recent SNN learning algorithms. It then analyzes the pros and cons of representative works in three main categories: 1) unsupervised learning, 2) supervised learning, and 3) ANN-SNN conversion. Unsupervised learning algorithms are mainly based on the mechanism of spike-timing-dependent plasticity (STDP): an interconnecting biological synapse is enhanced or inhibited according to the relative timing of the firing of the presynaptic and postsynaptic neurons. Unsupervised methods have stronger biological interpretability and can adjust synaptic weights through local optimization, but they struggle to scale to complicated, large-scale network structures. Therefore, drawing on the computational convenience of ANNs, supervised approaches such as gradient-based training and the ANN2SNN method have emerged. Gradient-based learning algorithms follow the training idea of back-propagation (BP), adjusting the weights according to the error between the output value and the target value; the key obstacle is the non-differentiable nature of discrete spikes. To resolve it, methods such as the surrogate gradient approach have been proposed. Gradient-based training thus combines the training advantages of ANNs and SNNs: the resulting SNN is both biologically interpretable and easy to compute. The ANN2SNN method, in turn, converts the weights of a trained ANN into an SNN.
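The pair-based STDP rule underlying the unsupervised algorithms can be sketched as follows. This is a generic textbook form of the rule, not a specific algorithm from the surveyed papers; the learning-rate and time-constant values are illustrative assumptions.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change as a function of spike timing.

    delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t >= 0): potentiation, synapse strengthened.
    Post-before-pre (delta_t < 0): depression, synapse weakened.
    Both effects decay exponentially with the timing gap.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

print(stdp_dw(5.0))    # positive: presynaptic spike preceded the postsynaptic one
print(stdp_dw(-5.0))   # negative: postsynaptic spike came first
```

Because the update depends only on the timing of the two neurons at a single synapse, it is a purely local optimization, which is exactly why STDP is biologically plausible yet hard to scale to deep architectures.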
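The surrogate gradient idea mentioned above can be illustrated with one common choice: keep the non-differentiable Heaviside step in the forward pass, but back-propagate through a smooth sigmoid-derivative approximation. The sigmoid surrogate and its sharpness parameter `alpha` are one popular choice assumed here for illustration, not the only one used in the literature.

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    """Forward pass: non-differentiable Heaviside step (spike if v >= threshold)."""
    return (np.asarray(v) >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, alpha=2.0):
    """Backward pass: the true derivative of the step is zero almost everywhere,
    so BP would stall. Replace it with the derivative of a sigmoid centred at
    the threshold, which is smooth and peaks where the neuron is near firing."""
    s = 1.0 / (1.0 + np.exp(-alpha * (v - v_thresh)))
    return alpha * s * (1.0 - s)

print(spike_forward(1.2), spike_forward(0.8))       # spike vs. no spike
print(spike_surrogate_grad(1.0))                    # largest gradient at threshold
```

With this substitution, the standard BP machinery can adjust SNN weights from the output error even though the spikes themselves are binary.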
This method maps the continuous activation values of the ANN onto spike trains. To reduce the conversion loss between the ANN and the SNN, the converted network is fine-tuned according to neuron dynamics. Because this training is indirect, weight transfer avoids direct SNN training and thereby allows SNNs to be applied to complicated network structures. ANNs have been widely used in the field of image recognition and can extract rich image features; with their biological interpretability and lower power consumption, SNNs can likewise achieve high performance in image recognition tasks. Finally, future bio-inspired SNN learning methods are predicted in terms of popular methods in related domains.
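The core intuition of ANN2SNN conversion is that a ReLU activation value can be reinterpreted as the firing rate of an integrate-and-fire neuron. A minimal sketch of this rate correspondence, under assumed parameters (unit threshold, soft reset, activations normalized to [0, 1]):

```python
import numpy as np

def if_rate(input_current, v_thresh=1.0, steps=1000):
    """Firing rate of an integrate-and-fire neuron under constant input.

    Uses reset-by-subtraction ("soft reset"), a common choice for reducing
    ANN2SNN conversion loss: residual potential above threshold is kept
    instead of being discarded.
    """
    v, n_spikes = 0.0, 0
    for _ in range(steps):
        v += input_current
        if v >= v_thresh:
            n_spikes += 1
            v -= v_thresh          # soft reset preserves the residual charge
    return n_spikes / steps

for a in [0.0, 0.25, 0.5]:
    # A normalized ReLU activation a should match the IF firing rate.
    print(a, if_rate(a))
```

Over a long enough simulation window the spike count divided by the number of steps converges to the ANN activation, which is why a trained ANN's weights can be reused in the converted SNN with small accuracy loss.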