Xiao Zhaolin1,2, Yang Zhilin1, Liu Huan1, Jin Haiyan1,2 (1. School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China; 2. Shaanxi Key Laboratory for Network Computing and Security Technology, Xi'an 710048, China)

Abstract
Objective The open field test (OFT) is a widely used experimental method in behavioral and pharmacological analysis. Comparing the behavioral differences between a test group and a reference group of mice usually requires considerable effort to process and inspect the OFT data. Because the data volume is large and the analysis depends heavily on the observer's subjective judgment, the observed behavioral differences have low precision and lack quantitative evaluation metrics. We therefore propose an OFT video classification method based on convolutional neural networks (CNN) that automatically classifies the behavioral differences of two groups of mice from quantitative features. Method Twenty-two-dimensional mouse motion features are extracted from the spatial and temporal domains of the video and quantized into a feature matrix; learnable behavioral feature matrix samples are then constructed by matrix splicing. CNNs with different structures are trained to classify the extracted feature matrix samples, and the influence of network structure on the classification results is analyzed. On top of the two-group classification, the importance of each feature dimension to classification accuracy is evaluated. Result Experiments on a real OFT dataset show that the proposed algorithm achieves a classification accuracy of 99.25%. The analysis further finds that the frequency of large-angle turns and the distribution of staying regions and staying times are more important to the classification than the other feature dimensions. Conclusion The proposed spliced feature matrix learning method accurately distinguishes the OFT videos of the two mouse groups, and its classification accuracy is clearly superior to existing manual analysis and classical machine learning methods.
A spliced feature matrix learning method for open field test video classification

Xiao Zhaolin1,2, Yang Zhilin1, Liu Huan1, Jin Haiyan1,2 (1. School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China; 2. Shaanxi Key Laboratory for Network Computing and Security Technology, Xi'an 710048, China)

Objective The open field test (OFT) is a widely used method for assessing locomotor activity and behavioral habits of mice in pharmacological experiments. A major goal of traditional OFT analysis is to find distinguishable features between a test group and a reference group: researchers inspect recorded OFT videos to identify the behavioral differences between the two groups of mice. Manual inspection of OFT data is time-consuming and costly, and a detailed analysis relies heavily on professional experience. We present a learning-based classification built on quantitative features extracted from OFT videos, and show that learning a spliced feature matrix outperforms using the individual features independently. A convolutional neural network (CNN) then classifies the video clips of the two groups of mice. Our contributions can be summarized as follows: 1) a novel spliced feature matrix representation of OFT video is constructed from 22-dimensional quantitative behavioral features; 2) to analyze the influence of network structure on the classification, we design and test eight different CNNs in the experiments. Method Our approach focuses on the macroscopic behavioral differences between the two groups of mice. From the spatial and temporal domains of the OFT video data, 22 distinct types of features are extracted, including average crawling speed and distance, the positional distribution of staying regions, resting time, turning frequencies, and so on. These quantitative vector descriptions of an OFT video can be classified by traditional machine learning classifiers, such as k-means clustering, boosting, support vector machines (SVM), or CNNs, and the quantitative feature vectors alone already partially separate the test group from the reference group.
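As a concrete illustration of the quantitative features named above, the following sketch computes three of them (total crawling distance, average speed, and large-angle turn count) from a sequence of (x, y) mouse positions. The function name, the 90° turn threshold, and the frame rate are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def extract_features(track, fps=25.0):
    """Compute a few OFT-style behavioral features from an (N, 2) array
    of mouse positions. A hypothetical sketch: threshold and units are
    assumptions, not the paper's exact 22-dimensional feature set."""
    track = np.asarray(track, dtype=float)
    steps = np.diff(track, axis=0)              # per-frame displacement (dx, dy)
    dists = np.linalg.norm(steps, axis=1)       # per-frame crawl distance
    total_distance = dists.sum()
    avg_speed = dists.mean() * fps              # distance units per second
    # heading change between consecutive steps, wrapped to (-pi, pi];
    # count "large-angle" turns sharper than 90 degrees
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    dh = np.abs(np.angle(np.exp(1j * np.diff(headings))))
    large_turns = int((dh > np.pi / 2).sum())
    return {"total_distance": total_distance,
            "avg_speed": avg_speed,
            "large_turns": large_turns}
```

Features such as staying-region distribution and resting time would be derived from the same track by histogramming positions over arena cells and thresholding per-frame speed.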
Another critical cue is the mouse crawling path, which cannot be represented as a feature vector. We therefore employ a novel regularization and fusion that combines the quantitative features with the non-quantitative crawling path: by constructing a self-correlation matrix of the weighted feature vector, a 484×484 quantitative feature matrix is obtained and spliced with a 484×484 crawling-path image, yielding a spliced feature matrix of size 484×968. We then classify the spliced feature matrices with CNNs of different structures, from 4 layers to 10 layers; each network is trained on either the quantitative feature matrix alone or the full spliced matrix, which lets us further evaluate the impact of network structure and feature dimensions on the precision of OFT video classification. In the experiments, we evaluate the proposed feature extraction and classification on a real OFT dataset consisting of videos of 32 mice: 17 in the test group and 15 in the reference group. The test group is injected with an antidepressant, while the reference group receives only a placebo. Each mouse is recorded independently with a video camera over a 24-hour OFT session. The recordings are cropped into short clips of 10 minutes each, producing 3 080 test-group samples and 1 034 reference-group samples; 3 000 samples are used for training and the remaining 1 114 for testing. Result The experiments indicate that the proposed algorithm outperforms manual classification and an SVM baseline: on the experimental dataset, its classification precision reaches 99.25%.
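The splicing step above can be sketched as follows. The exact self-correlation construction is not spelled out in the abstract, so this is one plausible reading: the outer product of the weighted 22-dimensional feature vector gives a 22×22 matrix, which is tiled up to 484×484 (22² = 484) and concatenated horizontally with the 484×484 path image:

```python
import numpy as np

def spliced_feature_matrix(features, weights, path_image):
    """Hypothetical sketch of the 484x968 spliced representation.
    The outer-product-plus-tiling construction is an assumption; only
    the 484x484 + 484x484 -> 484x968 layout comes from the paper."""
    v = np.asarray(features, float) * np.asarray(weights, float)  # (22,)
    corr = np.outer(v, v)                         # 22 x 22 self-correlation
    feat_mat = np.kron(corr, np.ones((22, 22)))   # tile up to 484 x 484
    assert path_image.shape == (484, 484), "path image must be 484x484"
    return np.hstack([feat_mat, path_image])      # spliced 484 x 968 sample
```

Splicing rather than summing keeps the quantitative features and the path image in disjoint spatial regions, so early convolutional layers can learn separate filters for each half.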
Accurate classification of mouse OFT video can be achieved with a fairly simple network structure, such as a 9-layer CNN. The contribution of the feature dimensions to classification accuracy is not balanced: the ablation results over the quantitative features show that large-angle turning, staying time, and positional distribution are more critical for identifying the group than the other quantitative feature dimensions. The non-quantitative crawling-path image also makes an obvious contribution, improving classification precision by 2%–3%. Conclusion We propose a CNN-based solution that separates the test-group OFT videos from the reference-group OFT videos via spliced feature matrix learning. Both the quantitative features and the qualitative crawling path are fused into a single regularized representation. The resulting classifier outperforms manual classification and traditional SVM methods, reaching a precision of 99.25% on the experimental dataset, which demonstrates great potential for OFT analysis. Future work includes verifying the relationship between the distinguishable features and specific mouse behaviors, and further optimizing the learning network to enhance the generalization of OFT classification.
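To give intuition for why a network of modest depth suffices for a 484×968 input, the arithmetic below traces the spatial size through a stack of stride-2 convolutions. The kernel/stride/padding values are hypothetical (the abstract does not list the layer hyperparameters); the point is only that a handful of downsampling layers already reduces the spliced matrix to a size a small classifier head can consume:

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    """Spatial size after one convolution, using the standard formula
    (size + 2*pad - kernel) // stride + 1. Hyperparameters here are
    illustrative assumptions, not the paper's exact architecture."""
    return (size + 2 * pad - kernel) // stride + 1

h, w = 484, 968
for _ in range(7):                     # seven stride-2 3x3 conv layers
    h, w = conv_out(h), conv_out(w)
# the 484x968 spliced matrix shrinks to a 4x8 feature map, small
# enough to flatten into a two-class (test vs. reference) output
```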