A review of detailed network simulation methods
2023, Vol. 28, No. 2, pages 358-371
Print publication date: 2023-02-16
Accepted: 2022-07-20
DOI: 10.11834/jig.220266
Yichen Zhang, Tiejun Huang. A review of detailed network simulation methods[J]. Journal of Image and Graphics, 2023, 28(2): 358-371.
Dendrites play an important role in enabling brain neurons to perform diverse information-processing functions. Detailed neuron models capture the information processing of dendrites and ion channels at a fine-grained level, helping scientists explore the properties of dendritic information processing beyond the limits of experimental conditions. Detailed network models built from such neurons can simulate the brain's information processing and are important for understanding the mechanisms of dendritic information processing and the computational principles behind brain network function. However, simulating detailed networks requires a large amount of computation, and how to simulate them efficiently is a challenging research problem. This paper surveys detailed network simulation methods, introducing the mainstream simulation platforms and their core simulation algorithms, as well as high-performance methods that further improve simulation efficiency. Representative high-performance methods are grouped, according to their development history and core ideas, into three categories: network-level parallel methods, cellular-level parallel methods, and GPU (graphics processing unit)-based parallel methods. The core ideas of each category are summarized, and representative works within each category are analyzed in detail. The advantages and disadvantages of the categories are then compared, and several classical methods are summarized. Finally, based on the development trend of high-performance simulation methods, future research directions are discussed.
Neurons in the brain have complicated morphologies. Their tree-like components are called dendrites. Dendrites receive spikes from connected neurons and integrate all received signals. Many experiments show that dendrites contain multiple types of ion channels, which can induce strong nonlinearity in signal integration. This nonlinearity makes dendrites fundamental units of neuronal signal processing, so understanding the mechanisms and functions of dendrites in neurons and neural circuits has become a core question in neuroscience. However, because of their highly complicated biophysical properties and the limits of current experimental techniques, it is hard to gain further insight into dendritic mechanisms and functions in neural circuits. Biophysically detailed multi-compartmental models are the typical models for capturing the biophysical details of neurons, including 1) the dynamics of dendrites, 2) ion channels, and 3) synapses. Detailed neuron models can be used to simulate dendritic signal integration, and detailed network models can simulate both biophysical mechanisms and network functions, helping scientists explore the mechanisms behind different phenomena.
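As a concrete illustration of what a multi-compartmental model computes, the sketch below integrates a small passive compartment chain with forward Euler. The chain layout and all parameter values are illustrative assumptions, not details from the article; production simulators use implicit integration and active ion channels.

```python
# Minimal sketch (not the article's code): a passive multi-compartment chain
# integrated with forward Euler. All parameter values are illustrative.
import numpy as np

N_COMP = 5          # compartments in an unbranched chain (index 0 = soma)
DT = 0.025          # time step, ms
C_M = 1.0           # membrane capacitance, uF/cm^2
G_LEAK = 0.1        # leak conductance, mS/cm^2
E_LEAK = -65.0      # leak reversal potential, mV
G_AXIAL = 0.5       # coupling conductance between neighbouring compartments

def step(v, i_ext):
    """Advance the membrane potential of every compartment by one time step."""
    i_leak = G_LEAK * (E_LEAK - v)
    i_axial = np.zeros_like(v)
    i_axial[:-1] += G_AXIAL * (v[1:] - v[:-1])   # current from the child side
    i_axial[1:] += G_AXIAL * (v[:-1] - v[1:])    # current from the parent side
    return v + DT * (i_leak + i_axial + i_ext) / C_M

v = np.full(N_COMP, E_LEAK)
i_ext = np.zeros(N_COMP)
i_ext[-1] = 0.3                                  # inject current distally
for _ in range(int(100.0 / DT)):                 # simulate 100 ms
    v = step(v, i_ext)
print("membrane potential per compartment (mV):", np.round(v, 2))
```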
However, detailed multi-compartmental neuron models have high computational complexity in simulation. When detailed networks are simulated, this complexity heavily burdens current simulators. How to accelerate the simulation of detailed neural networks has therefore been a challenging research topic for both the neuroscience and computer science communities. During the last decades, many works have tried to use parallel computing techniques to achieve higher simulation efficiency. In this study, we review these high-performance methods for detailed network simulation. First, we introduce typical detailed neuron simulators and their kernel simulation methods. Then we review the parallel methods that are used to accelerate detailed simulation. We classify these methods into three categories: 1) network-level parallel methods; 2) cellular-level parallel methods; and 3) GPU (graphics processing unit)-based parallel methods.
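For context on the kernel simulation methods mentioned above: mainstream simulators advance each neuron with an implicit update that solves a quasi-tridiagonal (Hines) linear system over the compartment tree at every time step. The sketch below shows the two-sweep elimination for an unbranched cable, the special case in which the Hines matrix is exactly tridiagonal; the numerical values are toy assumptions.

```python
# Minimal sketch: solving the tridiagonal system A v = b produced by an
# implicit update of an unbranched cable. A branched morphology gives the
# quasi-tridiagonal "Hines matrix", solved by the same two sweeps in O(n).
import numpy as np

def hines_solve(lower, diag, upper, rhs):
    """Eliminate from the distal end toward the root, then substitute back."""
    n = len(diag)
    d = diag.astype(float).copy()
    b = rhs.astype(float).copy()
    for i in range(n - 1, 0, -1):        # backward (leaf-to-root) elimination
        factor = upper[i - 1] / d[i]
        d[i - 1] -= factor * lower[i]
        b[i - 1] -= factor * b[i]
    v = np.empty(n)
    v[0] = b[0] / d[0]                   # root compartment
    for i in range(1, n):                # forward (root-to-leaf) substitution
        v[i] = (b[i] - lower[i] * v[i - 1]) / d[i]
    return v

# toy diagonally dominant system with 4 compartments (values are assumptions)
lower = np.array([0.0, -1.0, -1.0, -1.0])   # coupling to the parent
upper = np.array([-1.0, -1.0, -1.0, 0.0])   # coupling to the child
diag = np.array([3.0, 3.0, 3.0, 3.0])
rhs = np.array([1.0, 0.0, 0.0, 2.0])
# cross-check against a dense solve
A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
assert np.allclose(hines_solve(lower, diag, upper, rhs), np.linalg.solve(A, rhs))
print(hines_solve(lower, diag, upper, rhs))
```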
Network-level parallel methods parallelize the computation over the different neurons of a network. The computation inside each neuron is independent of that of other neurons, so different neurons can be computed in parallel. Before simulation, network-level methods assign the whole network to multiple processes or threads, and each process or thread simulates one group of neurons. With network-level parallel methods, scientists can use modern multi-core CPUs or supercomputers to simulate detailed network models.
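A minimal sketch of this idea is given below: neurons are distributed round-robin over worker processes, and each worker integrates only its own group for one interval. The toy single-compartment dynamics and all parameters are assumptions for illustration and do not correspond to any particular simulator's API.

```python
# Minimal sketch of network-level parallelism (not any simulator's actual API):
# neurons are assigned round-robin to worker processes, and each worker
# integrates only its own group of toy neurons for one exchange interval.
from multiprocessing import Pool
import numpy as np

N_NEURONS = 64
N_WORKERS = 4
STEPS = 400                        # time steps per spike-exchange interval

def simulate_group(neuron_ids):
    """Integrate one group of independent leaky integrate-and-fire neurons."""
    rng = np.random.default_rng(neuron_ids[0])
    ids = np.asarray(neuron_ids)
    v = np.full(len(ids), -65.0)
    spikes = []
    for step in range(STEPS):
        v += 0.025 * (-(v + 65.0) + rng.normal(20.0, 5.0, size=v.shape))
        fired = v > -50.0
        spikes.extend((int(gid), step) for gid in ids[fired])
        v[fired] = -65.0           # reset after a spike
    return spikes

if __name__ == "__main__":
    groups = [list(range(w, N_NEURONS, N_WORKERS)) for w in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:
        all_spikes = sum(pool.map(simulate_group, groups), [])
    print(len(all_spikes), "spikes collected from", N_WORKERS, "worker processes")
```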
Cellular-level parallel methods further parallelize the computation inside each neuron. Before simulation, cellular-level methods first split each neuron into several sub-blocks, and the computation of all sub-blocks is then parallelized. With cellular-level parallel methods, scientists can make full use of the parallel capability of supercomputers to further boost simulation efficiency.
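The sketch below illustrates the sub-block idea in a simplified form: one neuron's compartments are partitioned into blocks whose per-compartment current evaluation is independent and runs concurrently. Recoupling the sub-blocks inside the implicit solve is the harder part that the cellular-level methods reviewed here address; the partitioning and currents below are illustrative assumptions.

```python
# Minimal sketch of cellular-level parallelism: one neuron's compartments are
# partitioned into sub-blocks, and the independent per-compartment work
# (channel current evaluation) runs concurrently over the sub-blocks.
# Everything below is an illustrative assumption, not a published scheme.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

N_COMP = 12_000                    # compartments in one detailed neuron
N_BLOCKS = 8                       # sub-blocks processed in parallel

v = np.full(N_COMP, -65.0)
blocks = np.array_split(np.arange(N_COMP), N_BLOCKS)

def block_currents(idx):
    """Evaluate a leak plus a toy voltage-gated current for one sub-block."""
    vb = v[idx]
    gate = 1.0 / (1.0 + np.exp(-(vb + 40.0) / 5.0))   # steady-state activation
    return idx, -0.1 * (vb + 65.0) - 1.2 * gate * (vb - 50.0)

i_mem = np.empty(N_COMP)
with ThreadPoolExecutor(max_workers=N_BLOCKS) as pool:
    for idx, cur in pool.map(block_currents, blocks):
        i_mem[idx] = cur
print("membrane current evaluated for", N_COMP, "compartments in", N_BLOCKS, "blocks")
```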
In recent studies, more works have started to use GPUs in detailed network simulation. The strong parallel power of GPUs enables efficient simulation of detailed networks and makes GPU-based parallel methods more efficient than CPU-based parallel methods. GPU-based parallel methods can also be categorized into network-level and cellular-level methods: GPU-based network-level methods compute each neuron with one GPU thread, while GPU-based cellular-level methods compute a single neuron with multiple GPU threads.
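To make the thread mapping concrete, the sketch below uses the cellular-level mapping, one CUDA thread per compartment of a passive chain; the network-level mapping would instead give each thread a whole neuron's update loop. It assumes an NVIDIA GPU and Numba's CUDA backend, and all parameters are illustrative rather than the kernel of any simulator discussed in the text.

```python
# Minimal sketch of the cellular-level GPU mapping: one CUDA thread updates
# one compartment of a passive chain. Assumes an NVIDIA GPU with Numba's CUDA
# backend; all parameters are illustrative, not a simulator's actual kernel.
import numpy as np
from numba import cuda

@cuda.jit
def update_compartments(v_old, v_new, dt):
    i = cuda.grid(1)                       # global thread index = compartment
    n = v_old.shape[0]
    if i >= n:
        return
    i_inj = 0.3 if i == 0 else 0.0         # inject current at the soma
    i_leak = 0.1 * (-65.0 - v_old[i])
    i_axial = 0.0
    if i > 0:                              # coupling to the parent compartment
        i_axial += 0.5 * (v_old[i - 1] - v_old[i])
    if i < n - 1:                          # coupling to the child compartment
        i_axial += 0.5 * (v_old[i + 1] - v_old[i])
    v_new[i] = v_old[i] + dt * (i_inj + i_leak + i_axial)

n_comp = 4096
v = cuda.to_device(np.full(n_comp, -65.0))
v_next = cuda.device_array(n_comp)
threads = 128
blocks = (n_comp + threads - 1) // threads
for _ in range(4000):                      # 100 ms at dt = 0.025 ms
    update_compartments[blocks, threads](v, v_next, 0.025)
    v, v_next = v_next, v                  # ping-pong the state buffers
print(v.copy_to_host()[:4])
```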
In summary, we review and analyze recent detailed network simulation methods and classify them into the three categories described above. We further summarize the strengths and weaknesses of these methods and give our views on future work on detailed network simulation.
neuromorphic computing; brain simulation; dendritic computing; detailed neuron model; detailed network simulation