Published: 2019-01-16
Medical Image Processing

Received: 2018-05-16; revised: 2018-07-24
Funding: Fundamental Research Funds for the Central Universities (N171903002); Key Laboratory Project of the Department of Education of Liaoning Province (LZ2014015)
About the first author:
Liu Tong, born in 1993, male, master's student; research interest: medical image processing. E-mail: 771744437@qq.com
Zhu Hongbo, male, Ph.D.; research interests: body area networks, medical image processing. E-mail: 125226032@qq.com. Meng Zhaoyan, male, master's; research interest: medical image processing. E-mail: 929326598@qq.com. Dou Shengchang, male, Ph.D.; research interests: medical image processing, complex networks. E-mail: 644630338@qq.com
CLC number: TP301.6
Document code: A
Article ID: 1006-8961(2019)01-0124-11
Abstract
Objective Spherical harmonics and the repulsive (attraction-repulsion) mapping algorithm are used to vectorize the spherical surface texture and the nodule shape for benign/malignant determination of pulmonary nodules in chest CT images. Unlike existing deep-learning approaches to nodule screening, which mostly improve the framework while neglecting data preprocessing, the proposed method aims at a vector representation of the spherical texture and nodule shape so that they can be fed into a deep forest for feature-classification training. Method First, 3D pulmonary-nodule images are obtained by 3D reconstruction of data from the Affiliated Hospital of Liaoning University of Traditional Chinese Medicine. Second, spherical harmonics and the repulsive mapping algorithm map the texture onto a standard sphere as a mesh while preserving spatial information. Third, mesh-LBP and the mapping deformation energy are used to construct the spherical-texture and nodule-shape information, respectively. Finally, a mesh-based multi-grained scanning method is proposed to improve the deep-forest training framework, and the vectorized texture and shape features are fed into the improved framework for experimental verification. Result Extensive experiments show that, under four metrics, accuracy (ACC), specificity (SPE), sensitivity (SEN), and the area under the receiver-operating-characteristic curve (AUC), the proposed method outperforms existing advanced methods, reaching 76.06%, 69.46%, 88.46%, and 0.84, respectively. Conclusion Based on spherical harmonics and the repulsive mapping algorithm, the surface and shape features of pulmonary nodules can be successfully vectorized and trained. The method not only accounts for data preprocessing but also detects malignancy with two features, achieving higher accuracy than traditional single-feature detection, and it provides an effective approach to feature extraction and vectorization in 3D models.
Keywords
spherical harmonics; repulsive mapping algorithm; vectorized representation; mesh-LBP; deformation energy
Abstract
Objective In this paper, the spherical surface texture and nodule shape are vectorized through spherical harmonics and the repulsive mapping algorithm for the benign and malignant determination of pulmonary nodules in chest CT (computed tomography) images. Current deep-learning methods for benign and malignant screening of pulmonary nodules neglect data preprocessing while focusing on framework improvement. So far, deep-learning methods have mainly been oriented toward feature information that can be vectorized. In image processing, the targets fall into two categories, two- and three-dimensional. In two-dimensional processing, the input data must be of equal length. Because the acquired pulmonary nodules differ in size, large-scale images must be compressed and small-scale images stretched at the input, which inevitably affects the quality of feature extraction and the final classification results. In three-dimensional processing for the classification of CT nodules, the varying sizes of pulmonary nodules and the uncertainty of their growth positions impose strict requirements on the CT shooting angle, and the angle is uncontrollable during actual CT acquisition. Hence, if convolutional-channel features are to be used, angle normalization of the different pulmonary nodules must be solved first. Unlike traditional pulmonary-nodule classification methods, the proposed method aims to vectorize the spherical texture and nodule shape so that the data can be fed into the deep forest for feature-classification training. Method First, the three-dimensional pulmonary-nodule images are produced by three-dimensional reconstruction of the data from the Affiliated Hospital of Liaoning University of Traditional Chinese Medicine. The data are divided into training and test sets at an 8:2 ratio.
Second, the spherical harmonic function and the repulsive mapping algorithm are used to map the texture onto the standard sphere as a mesh while preserving the spatial information. Third, the texture features of the pulmonary nodules are calculated with mesh-LBP (local binary patterns) and vectorized via ring construction. Then, the shape energy loss is constructed from the distance difference between the centroid and the center point of the pulmonary nodule during reconstruction of the three-dimensional nodule, and this loss is extracted and vectorized during rebuilding. Finally, a mesh-based multi-grained scanning method is proposed to improve the deep-forest training framework by constructing multi-scale concentric ordered rings instead of using traditional multi-grained scanning. The vectorized texture and shape features are added to the improved deep-forest training framework for experimental verification. Result The pulmonary-nodule data of this trial were provided by the Affiliated Hospital of Liaoning University of Traditional Chinese Medicine with the approval of its institutional review board (IRB); the approval notice number is 2017111CS(KT)-040-01. According to the growth of the pulmonary nodules, the dataset was divided into four types: isolated, ground-glass, vascular-adhesion, and pleural-adhesion nodules. By benign/malignant grade, the nodules were further divided into five levels: 1, benign; 2, suspected benign; 3, indeterminate; 4, suspected malignant; and 5, malignant. A total of 7 326 three-dimensional lung nodules were extracted as the dataset for this experiment.
According to the experimental results, this algorithm performs better than existing advanced methods (variational auto-encoder, faster region-based convolutional neural network, GcForest deep forest, deep convolutional generative adversarial networks, and extreme learning machine) under the following indices: accuracy (ACC), specificity (SPE), sensitivity (SEN), and AUC (area under the receiver-operating-characteristic, ROC, curve). In the experiments, ACC, SPE, SEN, and AUC reached 76.06%, 69.46%, 88.46%, and 0.84, respectively. Accuracy indicates the proportion of positive and negative examples the classifier classifies correctly and measures its judgment of the samples. Specificity, the true-negative rate, indicates the proportion of correctly classified counterexamples and measures the classifier's ability to identify them. Sensitivity, the true-positive rate, indicates the proportion of correctly classified positive examples and measures the classifier's ability to identify them. AUC is the area under the ROC curve. The ROC curve displays the classification result dynamically, with the true-positive rate (sensitivity) as the ordinate and the false-positive rate (1-specificity) as the abscissa; a curve closer to the upper-left corner indicates a stronger classification ability. To measure the ROC curve precisely, it can be represented by the AUC value, which generally lies between 0.5 and 1; a larger value indicates a stronger classification ability. Comparatively, the proposed method is second best in SEN and AUC but best in SPE and ACC. Conclusion Lung cancer has one of the fastest-growing morbidity and mortality rates worldwide and is a serious threat to people's health and lives.
The onset of malignant pulmonary nodules is insidious; many patients are already locally advanced or even have distant metastases at the time of diagnosis and thus lose the chance of cure. Early detection of malignant pulmonary nodules is therefore crucial. Based on the spherical harmonic function and the repulsive mapping algorithm, the surface texture and shape features of pulmonary nodules can be successfully vectorized and trained. Besides accounting for data pretreatment, the accuracy of benign and malignant detection is improved by using these two features. The proposed method solves the angle-normalization problem by obtaining the nodule information through a spherical space mesh, and it can further obtain texture and shape-edge feature vectors at different scales as classification features via the ring method. It is also an effective approach to feature extraction and vectorization in three-dimensional models.
Key words
spherical harmonic function; repulsive mapping algorithm; vectorized expression; mesh-LBP; deformation energy
0 Introduction
With the continuous development of information technology, computer-aided diagnosis techniques based on medical images have emerged one after another. Against the background of medical big data, pulmonary-nodule images are ever easier to acquire, and extracting effective features from massive numbers of such images is crucial for the accurate diagnosis of benign and malignant nodules. Deep learning is a layered model built after the structure of the human brain; it has clear advantages in handling massive data and achieves higher accuracy in judging whether nodules are benign or malignant. Deep learning can process massive data in parallel without hand-crafted features, automatically acquiring and analyzing data features through network training. Current deep-learning methods are mainly oriented toward feature information that can be vectorized; in image processing, the targets fall into two categories, 2D and 3D.
Deep-learning frameworks for 2D image information can be divided, by architecture, into CNN (convolutional neural network)-like classification/regression frameworks, ELM[1] (extreme learning machine)-like training frameworks, and GAN (generative adversarial nets)-like adversarial frameworks; by feature-deepening mode, they can be divided into CNN-like convolutional-feature classification models and convolution-free classification models related to the GcForest deep forest[2]. CNN-like classification/regression frameworks obtain local features through convolutional channels; typical methods include the region-based convolutional neural network (R-CNN)[3], Faster R-CNN[4], and YOLO[5]. The ELM-like framework is an algorithm for single-hidden-layer feedforward neural networks (SLFNs): the input-to-hidden weights and the hidden-neuron thresholds are generated randomly and need no tuning during training, and only the number of hidden neurons must be set to obtain a unique optimal solution. GAN, proposed by Goodfellow et al.[6], is a framework that estimates generative models through an adversarial process; together with the variational auto-encoder (VAE)[7], it is the most representative approach to pixel generation. In practice, to meet multi-class requirements, the GAN framework is usually extended to the deep convolutional generative adversarial network (DCGAN)[8], which replaces the generator G and discriminator D of the original GAN with deep convolutional layers and, besides judging generated images, also classifies them. As for the VAE, the simplest auto-encoder consists of three fully connected layers: an input layer, a second layer with fewer nodes, and a third layer similar to the first. The network "encodes" the input into a hidden code and then "decodes" it from the hidden representation, and it can be trained well simply by measuring the reconstruction error and back-propagating the network parameters. The VAE differs from the plain auto-encoder in that its hidden code is drawn from a probability distribution learned during training. GcForest, in turn, is a convolution-free image-classification method that replaces convolution with random forests and enriches the features of the cascade forest through multi-grained scanning. All of these methods, however, share a drawback: the input must be of equal length. Because the acquired nodule images vary in size, large images must be compressed and small images stretched at the input, which inevitably degrades feature extraction and hence the final classification. Lin et al.[9] proposed feature pyramid networks to fuse multi-level features; because the feature pyramid is scale-invariant, it effectively handles objects of inconsistent size in detection, but it increases the preprocessing burden and applies only to 2D data, not to 3D data.
In deep-learning classification methods for 3D reconstructed images, rigorous 2D image information is generally obtained by relatively high-dose radiographic detection, and 3D image information is obtained by bilinear interpolation. For feature extraction, current methods all extend 2D frameworks. Ji et al.[10] proposed a 3D-CNN for video information: adding a parameter to the pooling and convolution layers of a CNN as a temporal dimension extends 2D image processing to video. In 2017, Wu et al.[11] took 2D face images and their depth maps as input, built two deep convolutional network models, fed the extracted high-level abstract features of both into a neural network, and fused the 2D features with the depth information to complete 3D feature extraction. Although these methods achieved outstanding results in their own scenarios, classifying nodules in lung CT imposes strict requirements on the shooting angle, because nodules differ in size and grow at uncertain positions, and the angle is uncontrollable in actual CT acquisition. Hence, if convolutional-channel features are to be used, angle normalization of the massive nodule set must be solved first; and because nodule morphology varies, the normalized angle remains ambiguous. We therefore obtain the nodule's appearance information through a spherical space-mesh construction, circumventing angle normalization. In addition, by taking rings, this construction yields texture and shape-edge feature vectors at different sphere-mesh scales as classification features.
In previous work, the reconstructed pulmonary-nodule features were successfully mapped onto the unit sphere; this is briefly reviewed in the preliminary modeling section. In the subsequent work, this paper first computes the surface-texture features of the nodule with mesh-LBP and vectorizes them by constructing rings, then builds the shape energy loss from the difference between the centroid-to-center distances during 3D nodule reconstruction, extracting and vectorizing the shape energy lost in reconstruction. Finally, the two vectorized features are fed into an improved deep-forest training framework to train and test benign/malignant classification and validate the proposed method.
1 Preliminary modeling work
In previous work, the nodule was segmented with the classical 3D parametric deformable-boundary method[12], in which two probabilistic visual-appearance models control the nodule's evolution during segmentation; the algorithm of Fang et al.[13] then produced a triangular 3D mesh with node coordinates. All node coordinates were transformed to spherical coordinates, and least squares was used to fit the coordinates of the original 3D nodule model and obtain the expansion coefficients, from which the model was reconstructed. The reconstruction results and the mean relative reconstruction errors of the five nodule classes are shown in Figs. 1 and 2.
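The least-squares fitting of the expansion coefficients can be sketched as follows. The sketch uses a hand-written real spherical-harmonic basis truncated to degree 2 rather than the order-30 expansion used in the paper, and the function names are illustrative:

```python
import numpy as np

def real_sh_basis(dirs):
    """Real spherical-harmonic basis up to degree 2, evaluated at unit
    direction vectors dirs of shape (N, 3). The constants are the standard
    real-SH normalization; 9 basis functions in total."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                    # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,      # l = 1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z ** 2 - 1),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),  # l = 2
    ], axis=1)

def fit_radius(dirs, r):
    """Least-squares spherical-harmonic expansion of radial samples
    r(direction), the same fitting step used to obtain the expansion
    coefficients (here truncated to degree 2 for brevity)."""
    A = real_sh_basis(dirs)
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs, A @ coeffs  # coefficients and reconstructed radii
```

Evaluating the fitted expansion at the sampled directions reconstructs the radial function exactly whenever it lies in the span of the truncated basis.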
To make the distance from the nodule center to every node a unit distance, as shown in Fig. 3, the following operation is performed: each node coordinate is scaled by the reciprocal of its distance to the nodule center, so that every node lies on the unit sphere.
The attraction (compatibility) phase of the attraction-repulsion mapping algorithm updates each mesh node with the following iteration:
$ {P'}_{b, i} = {P_{b, i}} + {C_{A, 1}}\sum\limits_{j = 1;j \ne i}^J {\left( {{\Delta _{b, ij}}d_{b, ij}^2 + {C_{A, 2}}\frac{{{\Delta _{b, ij}}}}{{{d_{b, ij}}}}} \right)} $ | (1) |
The subsequent repulsion phase pushes all nodes outward along rays from the sphere's center, so that they end up evenly spaced on the sphere and the whole mesh is inflated. To avoid conflicts between nodes that have moved and those that have not, every node is updated as
${{P''}_{b + 1, i}} = {{P'}_{b, i}} + \frac{{{C_R}}}{{2M}}\sum\limits_{j = 1;j \ne i}^M {\left( {\frac{{{\Delta _{b, ij}}}}{{{{\left| {{\Delta _{b, ij}}} \right|}^2}}}} \right)} $ | (2) |
The attraction-repulsion mapping algorithm is as follows:
Input: 3D pulmonary-nodule mesh surface;
Output: the unit sphere after mapping.
For each iteration
Attraction phase:
pick any mesh node to process;
update the node's position by Eq. (1) so that it becomes the center of all its neighboring nodes.
Repulsion phase:
update node positions by Eq. (2), avoiding collisions between moved and unmoved nodes.
End
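One iteration of Eqs. (1) and (2) can be sketched in code as follows. The step constants and the final re-projection onto the unit sphere are illustrative assumptions, not the paper's calibrated settings:

```python
import numpy as np

def attraction_repulsion_step(P, neighbors, c_a1=0.05, c_a2=0.05, c_r=0.05):
    """One iteration of the attraction-repulsion mapping sketch.

    P         : (N, 3) node positions
    neighbors : list of neighbor-index lists (mesh adjacency)
    c_a1, c_a2, c_r : step constants (illustrative assumptions)
    """
    N = len(P)
    P1 = P.copy()
    # Attraction phase (Eq. 1): pull each node toward its mesh neighbors.
    for i in range(N):
        for j in neighbors[i]:
            delta = P[j] - P[i]
            d = np.linalg.norm(delta)
            if d > 1e-12:
                P1[i] += c_a1 * (delta * d ** 2 + c_a2 * delta / d)
    # Repulsion phase (Eq. 2): push every node away from all the others.
    P2 = P1.copy()
    M = N - 1
    for i in range(N):
        for j in range(N):
            delta = P1[i] - P1[j]
            d2 = delta @ delta
            if j != i and d2 > 1e-12:
                P2[i] += c_r / (2 * M) * delta / d2
    # Re-project onto the unit sphere so every node stays at unit distance
    # from the center.
    return P2 / np.linalg.norm(P2, axis=1, keepdims=True)
```

Iterating this step spreads the mesh nodes over the sphere while the attraction term keeps neighboring nodes coherent.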
The mapping result is shown in Fig. 5.
2 Mesh-based spherical feature representation
2.1 Texture information
Constructing an LBP-like pattern on a 3D mesh first requires a scheme that builds the ring of facets around a center facet and traverses them in an ordered manner. Let f_c denote the center facet.
The bridging algorithm is as follows:
Input:
Output:
procedure Bridge(
while
add
end while
return
End
The ring-construction algorithm is as follows:
Input:
Output: the ordered ring
procedure BuildRing(
For all <
add
add
End for
Return
End
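As a rough illustration of what the ring construction produces, the sketch below collects the first ring of a facet by vertex adjacency; on a regular triangular mesh this yields the 12 facets that mesh-LBP expects. The paper builds the ring in traversal order via the bridging procedure, which this vertex-sharing shortcut does not reproduce:

```python
def first_ring(faces, c):
    """Indices of the facets in the first ring around facet c: every facet
    that shares at least one vertex with the center facet. On a regular
    triangular mesh (interior vertex valence 6) this yields 12 facets,
    the neighborhood size used by mesh-LBP. Ordering them into a closed,
    oriented ring is what the bridge/ring-building procedures do."""
    center = set(faces[c])
    return [i for i, f in enumerate(faces)
            if i != c and center & set(f)]
```

Applied to an interior facet of a regular grid triangulation, the function returns exactly 12 neighbors.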
Let f_k (k = 0, …, 11) denote the facets of the ordered ring around the center facet f_c, and h(f) a discrete function defined on facet f (e.g., its gray value); the mesh-LBP of f_c is then computed as
$ \begin{array}{l} ML({f_{\rm{c}}}) = \sum\limits_{k = 0}^{11} {s(h\left( {{f_k}} \right) - h({f_{\rm{c}}})) \cdot a\left( k \right)} \\ \;\;\;\;\;\;\;\;\;\;\;s\left( x \right) = \left\{ {\begin{array}{*{20}{c}} 1&{x \ge 0}\\ 0&{x < 0} \end{array}} \right. \end{array} $ | (3) |
where s(x) is the step function defined in Eq. (3) and a(k) is the weight assigned to the k-th ring facet (e.g., 2^k for a binary encoding).
On the mapped unit sphere, take any triangular facet, apply the bridging and ring-construction algorithms, compute the mesh-LBP value from the 12 surrounding neighbor facets, and assign it to the center facet. Performing the same operation for every triangular facet on the unit sphere gives each facet a new value, which constitutes the texture information of the pulmonary nodule.
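The per-facet operator of Eq. (3) is small enough to write out directly. The weighting a(k) = 2^k below is an assumption (it gives the classic binary encoding); the paper leaves a(k) generic:

```python
def mesh_lbp(h_center, h_ring, alpha=lambda k: 2 ** k):
    """Mesh-LBP of one facet, Eq. (3).

    h_center : scalar h(f_c) of the center facet
    h_ring   : the 12 values h(f_k) of the ordered ring
    alpha    : weight a(k); 2**k is an assumed binary encoding
    """
    assert len(h_ring) == 12
    # s(x) = 1 if x >= 0 else 0, applied to h(f_k) - h(f_c)
    return sum((1 if h_k - h_center >= 0 else 0) * alpha(k)
               for k, h_k in enumerate(h_ring))
```

With a(k) = 2^k the result ranges from 0 (every ring value below the center) to 4095 (every ring value at or above it).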
2.2 Shape information
Triangulating the pulmonary nodule yields its facets. Writing (x_{1,j}, y_{1,j}, z_{1,j}), j = 1, 2, 3, for the vertex coordinates of a facet of the original model, its centroid is
$ \left\{ \begin{array}{l} x_{1, 1}^{\rm{g}} = \frac{{{x_{1, 1}} + {x_{1, 2}} + {x_{1, 3}}}}{3}\\ y_{1, 1}^{\rm{g}} = \frac{{{y_{1, 1}} + {y_{1, 2}} + {y_{1, 3}}}}{3}\\ z_{1, 1}^{\rm{g}} = \frac{{{z_{1, 1}} + {z_{1, 2}} + {z_{1, 3}}}}{3} \end{array} \right. $ | (4) |
so the centroid coordinates of all triangular facets can be obtained.
From the centroid coordinates, the distance from a centroid to the origin is
${R_{1,1}} = \sqrt {{{(x_{1,1}^{\rm{g}})}^2} + {{(y_{1,1}^{\rm{g}})}^2} + {{(z_{1,1}^{\rm{g}})}^2}} $ | (5) |
After the 30th-order reconstruction, the centroid coordinates of all facets are obtained in the same way, giving the distances R_{30,i}. The shape energy loss is then the sum of the distance differences over all f facets:
$ E = \Delta R = \sum\limits_{i = 1}^f {\left( {{R_{1, i}} - {R_{30, i}}} \right)} $ | (6) |
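Eqs. (4) to (6) amount to a few lines of array arithmetic; the helper names below are illustrative:

```python
import numpy as np

def facet_centroids(vertices, faces):
    """Centroid of every triangular facet, Eq. (4).
    vertices: (V, 3) coordinates; faces: (F, 3) vertex indices."""
    return vertices[faces].mean(axis=1)

def centroid_radii(vertices, faces):
    """Distance from each facet centroid to the origin, Eq. (5)."""
    return np.linalg.norm(facet_centroids(vertices, faces), axis=1)

def shape_energy(radii_orig, radii_recon):
    """Shape energy loss E = sum_i (R_{1,i} - R_{30,i}) between the
    original mesh and its 30th-order reconstruction, Eq. (6)."""
    return float(np.sum(radii_orig - radii_recon))
```

For a single facet with vertices (1,0,0), (0,1,0), (0,0,1), the centroid is (1/3, 1/3, 1/3) and its radius is sqrt(3)/3.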
3 Mesh-based deep-forest training framework
The deep-forest model[2] is a convolution-free image-classification method: multi-grained scanning with random forests replaces the convolution operation, the scanning enriches the features of the cascade forest, and each level of the cascade is an ensemble of decision trees.
Multi-grained scanning[17] slides windows over the original input sample, so the sample scale varies with the window scale. Because our samples are meshes on a surface, this paper proposes a mesh-based multi-grained scanning method: the ring-construction algorithm above is extended to a multi-scale framework, forming multi-scale concentric ordered rings from which the scanning features are extracted.
The multi-scale concentric-ordered-ring algorithm is as follows:
Input:
Output:
procedure MultiScaleConcentricOrderedRings
For
(
add
End for
Return
End
In this paper, ring counts of 1, 2, 3, 4, and 5 are used as the five sliding-window scales. Fig. 8 shows a five-ring moving example: ring 1 contains 12 facets, ring 2 contains 36, and rings 3, 4, and 5 contain 72, 120, and 180 facets, respectively; starting from the center facet, the window moves one facet at a time.
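The ring sizes reported above follow the closed form 6r(r+1) on a regular triangular mesh; the cumulative window sizes in the second helper are an assumption about how the five scales aggregate the rings:

```python
def ring_size(r):
    """Facet count of the r-th concentric ring around a center facet on a
    regular triangular mesh: 6*r*(r+1), matching the counts 12, 36, 72,
    120, 180 for rings 1-5 given in the text."""
    return 6 * r * (r + 1)

def multi_scale_window_sizes(max_ring=5):
    """Assumed sliding-window sizes of the mesh-based multi-grained
    scanning: scale k covers rings 1..k, i.e. the cumulative facet
    count up to ring k."""
    sizes, total = [], 0
    for r in range(1, max_ring + 1):
        total += ring_size(r)
        sizes.append(total)
    return sizes
```

Here `multi_scale_window_sizes()` returns [12, 48, 120, 240, 420].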
The cascade forest consists of multiple levels[18], and each level contains two forest types: a completely-random forest and a random forest. Each completely-random forest contains 1 000 completely-random trees, each of which randomly selects a data feature as the split at every node. Likewise, each random forest contains 1 000 trees, but when splitting a node, each tree first randomly selects √d candidate features (d being the dimension of the input features) and then chooses the one with the best Gini value as the split.
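One cascade level can be sketched with scikit-learn. Approximating the completely-random forest by `ExtraTreesClassifier(max_features=1)` is an assumption, as is the reduced tree count (the paper uses 1 000 trees per forest); each forest's class-probability vector is concatenated as augmented features for the next level:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

def cascade_level(X, y, X_aug=None, n_trees=50, seed=0):
    """One level of the cascade forest, sketched with scikit-learn.

    A level holds a completely-random forest (approximated here by
    ExtraTreesClassifier with max_features=1) and an ordinary random
    forest (max_features='sqrt'). Each forest outputs class probabilities
    that are concatenated as the augmented features of the next level."""
    feats = X if X_aug is None else np.hstack([X, X_aug])
    crf = ExtraTreesClassifier(n_estimators=n_trees, max_features=1,
                               random_state=seed).fit(feats, y)
    rf = RandomForestClassifier(n_estimators=n_trees, max_features='sqrt',
                                random_state=seed).fit(feats, y)
    return np.hstack([crf.predict_proba(feats), rf.predict_proba(feats)])
```

For a two-class problem the augmented feature vector has 2 forests x 2 classes = 4 dimensions per sample, and each forest's probabilities sum to one.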
4 Experiments
4.1 Dataset and ethical review
The texture and shape features of the 3D pulmonary nodules have been vectorized and modeled by the methods above; classification is now performed with the mesh-based deep-forest training framework. The pulmonary-nodule data for this experiment were provided by the Affiliated Hospital of Liaoning University of Traditional Chinese Medicine with the approval of its institutional review board (IRB); the approval notice number is 2017111CS(KT)-040-01. According to how the nodules grow, the dataset is divided into four types: isolated, ground-glass, vascular-adhesion, and pleural-adhesion nodules (Fig. 11). By benign/malignant grade, the nodules are further divided into five levels: 1, benign; 2, suspected benign; 3, indeterminate; 4, suspected malignant; and 5, malignant. A total of 7 326 3D nodules were extracted as the dataset and split 8:2 into training and test sets, i.e., 5 861 nodules for training and 1 465 for testing. The detailed distribution is given in Table 1.
Table 1
Sample data for training and testing of the five classes of pulmonary nodules
Isolated | Ground-glass | Vascular adhesion | Pleural adhesion
1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5
Training samples | 584 | 271 | 215 | 188 | 629 | 170 | 296 | 189 | 436 | 327 | 206 | 153 | 234 | 133 | 311 | 467 | 241 | 134 | 326 | 351
Test samples | 123 | 102 | 21 | 97 | 76 | 101 | 54 | 67 | 47 | 111 | 88 | 35 | 49 | 57 | 82 | 76 | 87 | 69 | 38 | 85
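The 8:2 split and the totals in Table 1 can be checked directly:

```python
# Per-type, per-level sample counts transcribed from Table 1
# (order: isolated, ground-glass, vascular adhesion, pleural adhesion;
#  five malignancy levels within each type).
TRAIN = [584, 271, 215, 188, 629,  170, 296, 189, 436, 327,
         206, 153, 234, 133, 311,  467, 241, 134, 326, 351]
TEST = [123, 102, 21, 97, 76,  101, 54, 67, 47, 111,
        88, 35, 49, 57, 82,  76, 87, 69, 38, 85]

def split_summary():
    """Training total, test total, and grand total of the dataset."""
    return sum(TRAIN), sum(TEST), sum(TRAIN) + sum(TEST)
```

The sums come to 5 861 training and 1 465 test nodules, 7 326 in all, and the per-type test totals (419, 380, 311, 355) match the count column of Table 2.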
4.2 Experimental results and analysis
In this experiment, nodules of levels 1 and 2 are judged benign and those of levels 3, 4, and 5 malignant. Four metrics are used: accuracy (ACC), specificity (SPE), sensitivity (SEN), and the area under the receiver-operating-characteristic (ROC) curve (AUC). ACC is the proportion of positive and negative samples the model classifies correctly; the true-negative rate (SPE, specificity) is the proportion of negatives classified correctly; the true-positive rate (SEN, sensitivity) is the proportion of positives classified correctly; AUC is the area under the ROC curve. The ROC curve presents the classification result dynamically, with SEN as the ordinate and the false-positive rate (1-specificity) as the abscissa. To quantify the ROC curve precisely, the AUC value is used; it usually lies between 0.5 and 1, and a larger value indicates a stronger classification ability. Fig. 12 compares the AUC values of different methods; Table 2 shows the classification of the different nodule categories in the test set by our method; Table 3 compares ACC, SPE, and SEN across methods.
Table 2
Classification of different types of pulmonary nodules in the test dataset by our method
Type | Count | TP | FN | FP | TN | SEN/% | SPE/% | ACC/%
1 | 419 | 392 | 27 | 487 | 842 | 93.55 | 63.35 | 70.59 |
2 | 380 | 348 | 32 | 89 | 412 | 91.57 | 82.23 | 86.26 |
3 | 311 | 254 | 57 | 70 | 339 | 81.67 | 82.88 | 82.36 |
4 | 355 | 302 | 53 | 195 | 320 | 85.07 | 62.13 | 71.49 |
Total | 1 465 | 1 296 | 169 | 841 | 1 913 | 88.46 | 69.46 | 76.06
Note: TP is the number of correctly classified positive samples; FN the number of positive samples misclassified as negative; FP the number of negative samples misclassified as positive; TN the number of correctly classified negative samples.
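The three ratio metrics can be computed straight from the confusion counts:

```python
def confusion_metrics(tp, fn, fp, tn):
    """ACC, SPE and SEN as defined in Sec. 4.2, from a 2x2 confusion matrix."""
    sen = tp / (tp + fn)                   # true-positive rate (sensitivity)
    spe = tn / (tn + fp)                   # true-negative rate (specificity)
    acc = (tp + tn) / (tp + fn + fp + tn)  # overall accuracy
    return acc, spe, sen
```

Applied to the totals row of Table 2 (TP = 1 296, FN = 169, FP = 841, TN = 1 913), it reproduces the reported 76.06%, 69.46%, and 88.46%.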
Table 3
Comparison of lung nodule classification by different methods
Analysis of the above data shows that the SEN of the proposed method reaches 88.46%, which is second best, while SPE and ACC are both best among the compared methods, and the AUC of 0.84 is also second best; the method therefore discriminates pulmonary nodules well.
5 Conclusion
Building on the attraction-repulsion mapping of the reconstructed nodule onto the unit sphere, this paper constructs and vectorizes the nodule's texture and shape features on the sphere. A mesh-based vectorized-feature construction is proposed: on one hand, mesh-LBP computes the spherical surface texture, which is vectorized by constructing rings; on the other hand, the shape energy loss is built during 3D nodule reconstruction, extracting and vectorizing the nodule's shape features. Finally, in training the texture and shape information with the improved deep-forest framework, a scanning method that builds multi-scale concentric ordered rings replaces conventional multi-grained scanning to judge nodule malignancy. SEN reaches 88.46%, with SPE and ACC at 69.46% and 76.06%, respectively; overall, the benign/malignant discrimination is satisfactory, demonstrating the feasibility of the method. The method not only accounts for data preprocessing but also detects malignancy with two nodule features, achieving higher accuracy than traditional single-feature detection, and it provides an effective approach to feature extraction and vectorization in 3D models. Nevertheless, the accuracy is currently limited by how precisely physicians delineate the boundary between the nodule and the surrounding tissue: an error in a single CT slice of the sequence directly affects the spherical-texture lattice values. Future work will therefore focus on more effective image-segmentation preprocessing to improve the accuracy of the spherical texture representation.
References
-
[1] Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: theory and applications[J]. Neurocomputing, 2006, 70(1-3): 489-501.[DOI: 10.1016/j.neucom.2005.12.126]
-
[2] Zhou Z H, Feng J. Deep forest: towards an alternative to deep neural networks[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: IJCAI, 2017: 3553-3559.[DOI: 10.24963/ijcai.2017/497]
-
[3] Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE, 2014: 580-587.[DOI: 10.1109/CVPR.2014.81]
-
[4] Ren S Q, He K M, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.[DOI: 10.1109/TPAMI.2016.2577031]
-
[5] Redmon J, Divvala S, Girshick R, et al. You only look once: unified, real-time object detection[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 779-788.[DOI: 10.1109/CVPR.2016.91]
-
[6] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2014: 2672-2680.
-
[7] Kingma D P, Welling M. Auto-encoding variational Bayes[J]. arXiv preprint arXiv: 1312.6114, 2013.
-
[8] Chang J, Scherer S. Learning representations of emotional speech with deep convolutional generative adversarial networks[J]. arXiv preprint arXiv: 1705.02394, 2017.
-
[9] Lin T Y, Dollár P, Girshick R, et al. Feature pyramid networks for object detection[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 936-944.[DOI: 10.1109/CVPR.2017.106]
-
[10] Ji S W, Xu W, Yang M, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 221–231. [DOI:10.1109/TPAMI.2012.59]
-
[11] Wu M D. Research on 3D face recognition based on deep learning[D]. Harbin: Harbin Institute of Technology, 2016.
-
[12] Kass M, Witkin A, Terzopoulos D. Snakes: active contour models[J]. International Journal of Computer Vision, 1988, 1(4): 321-331.[DOI: 10.1007/BF00133570]
-
[13] Fang Q Q, Boas D A. Tetrahedral mesh generation from volumetric binary and grayscale images[C]//2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. Boston, MA, USA: IEEE, 2009: 1142-1145.[DOI: 10.1109/ISBI.2009.5193259]
-
[14] Werghi N, Berretti S, del Bimbo A. The mesh-LBP:a framework for extracting local binary patterns from discrete manifolds[J]. IEEE Transactions on Image Processing, 2015, 24(1): 220–235. [DOI:10.1109/TIP.2014.2370253]
-
[15] Wang X Y, Han T X, Yan S C. An HOG-LBP human detector with partial occlusion handling[C]//Proceedings of 2009 IEEE 12th International Conference on Computer Vision. Kyoto, Japan: IEEE, 2009: 32-39.[DOI: 10.1109/ICCV.2009.5459207]
-
[16] Ojala T, Pietikäinen M, Harwood D. A comparative study of texture measures with classification based on featured distributions[J]. Pattern Recognition, 1996, 29(1): 51-59.[DOI: 10.1016/0031-3203(95)00067-4]
-
[17] Valery A, Rauch E F, Pofelski A, et al. Dealing with multiple grains in tem lamellae thickness for microstructure analysis using scanning precession electron diffraction[J]. Microscopy and Microanalysis, 2015, 21(S3): 1243–1244. [DOI:10.1017/S143192761500700X]
-
[18] Zhou Z H, Feng J. Deep forest: towards an alternative to deep neural networks[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: IJCAI, 2017: 3553-3559.[DOI: 10.24963/ijcai.2017/497]