
Published: 2019-07-16
DOI: 10.11834/jig.180553
2019 | Volume 24 | Number 7




Image Analysis and Recognition




Joint label prediction and discriminant projection learning for semi-supervised canonical correlation analysis
Zhou Kaiwei, Wan Jianwu, Wang Hongyuan, Ma Hongliang
School of Information Science and Engineering, Changzhou University, Changzhou 213000, China

Abstract

Objective Canonical correlation analysis (CCA) is a classic multi-view learning method. To improve the discriminative power of the learned projection directions, existing CCA methods usually introduce sample label information. Obtaining label information, however, requires considerable manpower and material resources. This paper therefore proposes a semi-supervised canonical correlation analysis algorithm that jointly performs label prediction and discriminant projection learning. Method Label prediction is fused with model construction. Specifically, label prediction is embedded into the CCA framework: the label matrix learned by the joint framework is used to update the projection directions, and the learned projection directions in turn update the label matrix. Label prediction and projection learning depend on each other and are updated alternately, so the predicted labels keep approaching the true labels, which benefits learning the optimal projection directions. Result The method is evaluated on four face datasets: AR, Extended Yale B, Multi-PIE, and ORL. With a feature dimension of 20, it achieves recognition rates of 87%, 55%, 83%, and 85% on AR, Extended Yale B, Multi-PIE, and ORL, respectively. When 2 (3, 4, 5) face images per person in the training set are taken as supervised samples, the recognition rate of the proposed method is higher than that of the other methods on all four datasets; with 5 supervised images per person, the recognition rates on AR, Extended Yale B, Multi-PIE, and ORL are 94.67%, 68%, 83%, and 85%, respectively. The results show that, when little label information is available and the reduced dimensionality is low, the joint learning model makes the dimension-reduced data preserve the most useful information and yields good recognition results. Conclusion The proposed joint learning method improves the discriminative power of the learned projection directions, effectively handles the setting with few labeled and many unlabeled samples, and overcomes the defect of the two-step learning strategy.

Keywords

canonical correlation analysis; label prediction; discriminative projection; joint learning; semi-supervised

Joint label prediction and discriminant projection learning for semi-supervised canonical correlation analysis
Zhou Kaiwei, Wan Jianwu, Wang Hongyuan, Ma Hongliang
School of Information Science and Technology, Changzhou University, Changzhou 213000, China
Supported by: National Natural Science Foundation of China (61502058, 61572085)

Abstract

Objective Canonical correlation analysis (CCA) is a classic multi-view learning method. Existing CCA-based methods often embed sample label information into the model to improve the discriminative capability of the learned projection directions. However, obtaining label information in real applications is difficult and requires substantial manpower and material resources. For this reason, semi-supervised canonical correlation analysis has been proposed, which learns the projection directions from a limited number of labeled samples together with a large quantity of unlabeled ones. Existing semi-supervised CCA models, however, adopt a two-step learning strategy: the model is built only after label prediction, so label prediction and model construction are independent of each other. Building the model with the predicted labels of unlabeled samples can then lead to a locally optimal projection direction and degrade the subsequent classification results. This work proposes joint label prediction and discriminant projection learning for semi-supervised canonical correlation analysis to address the semi-supervised learning problem and the shortcomings of the two-step strategy. Method The algorithm fuses label prediction with model construction. Specifically, label prediction is integrated into the CCA framework: the label matrix of the training samples learned by the joint framework is used to update the projection directions, and the learned projection directions in turn renew the labels of the unlabeled samples. Label prediction and projection learning depend on each other and are updated alternately, so the predicted labels move closer to the true labels, which benefits learning the optimal projection directions. The joint framework is optimized with an alternating iterative strategy to obtain the optimal predicted labels and projection directions. The discriminant features of the test images are extracted with the discriminant projection directions acquired by the joint framework and are finally categorized by the classifier. Result Experiments are performed on four face datasets: AR, Extended Yale B, Multi-PIE, and ORL. The results show that the proposed method obtains good recognition with only a few features and few labeled samples. Specifically, with three face images per person selected as supervised samples, the effect of the feature dimension is analyzed: the recognition rate of every method increases with the dimension, and the proposed algorithm shows a clear advantage at low dimensions. When the feature dimension is 20, its recognition rates on the AR, Extended Yale B, Multi-PIE, and ORL face datasets are 87%, 55%, 83%, and 85%, respectively. Then, 2 (3, 4, 5) face images per person are selected as supervised samples to analyze the effect of the number of labeled samples: the recognition rate of every method increases with the number of labeled images. With five supervised face images per person, the recognition rates on the AR, Extended Yale B, Multi-PIE, and ORL face datasets are 94.67%, 68%, 83%, and 85%, respectively. Conclusion This work proposes a joint learning method that renders the learned projection directions highly discriminative; it effectively handles a limited number of labeled samples together with a large quantity of unlabeled samples and overcomes the shortcomings of the two-step learning strategy. The experimental results on the AR, Extended Yale B, Multi-PIE, and ORL face datasets show that the recognition rate of the proposed method is significantly higher than those of the compared methods, especially when supervised samples are scarce and the feature dimensionality after reduction is low. The convergence of the proposed iterative algorithm is confirmed experimentally. These findings indicate that the features extracted with the discriminant projection directions learned by the joint model allow the dimension-reduced data to retain the information inherent in the data as much as possible, so that good classification results are obtained when the extracted features are categorized by the classifier.

Key words

canonical correlation analysis (CCA); label prediction; discriminative projection; joint learning; semi-supervised

0 Introduction

In recent years, multi-view learning[1] has attracted wide attention in the research community. Multi-view data[2-3], such as web pages that can be represented by several heterogeneous views (e.g., text and images), provide richer and more diverse information than single-view data because the views are both consistent and complementary. Multi-view learning[4] studies how to fuse heterogeneous multi-view data effectively to improve the performance of the learner.

Current approaches to multi-view learning mainly include early-fusion methods, view-disagreement methods, and multi-view subspace learning. Early-fusion methods are dominated by multiple kernel learning[5-6], which combines the kernel functions generated on each view linearly or nonlinearly to fuse multi-view feature information. View-disagreement methods are represented by co-training[7], which builds classifiers on multiple views and classifies the data through the cooperation among these classifiers[8]. Subspace learning establishes the relationship among multi-view data by seeking a latent space shared by the views. This paper focuses on multi-view subspace learning methods[9-10].

Canonical correlation analysis (CCA) is a classic multi-view subspace learning method. CCA applies a linear transformation to each view so that the correlation between the transformed views is maximized. CCA has been successfully applied in fields such as face recognition[11], image processing[12], and image retrieval[10]. Going further, researchers have proposed nonlinear canonical correlation analysis methods, for example, a new locality-preserving CCA (ALPCCA)[13] and local-density-enhanced CCA (LDECCA)[14]. However, these methods are all unsupervised. To enhance the discriminative power of CCA, researchers have introduced sample label information and proposed discriminative CCA (DCCA)[15], locally discriminative CCA (LDCCA)[16], and CCA based on local sparse representation and linear discriminant analysis[17].

In real life, obtaining the label information of samples is difficult and requires a great deal of manpower and material resources, so semi-supervised canonical correlation analysis methods have been proposed. They usually adopt a two-step learning strategy: the labels of unlabeled samples are first predicted, and the learned labels are then used to build the CCA model. For example, label-propagation-based semi-supervised CCA (LPbSCCA)[18] first infers the soft labels of unlabeled samples with sparse representation and then builds the CCA model with these soft labels, and cost-sensitive semi-supervised CCA[19] obtains the soft labels of unlabeled samples with the $l_{2}$ norm and then embeds the misclassification cost into the CCA framework. In these methods, label prediction and model construction are independent of each other, so the predicted labels may be unfavorable to the subsequent model construction and lead to a locally optimal projection direction.

To address this problem, this paper proposes joint label prediction and discriminant projection learning for semi-supervised canonical correlation analysis (JLPDPSCCA). Specifically, the label-prediction process is embedded into the CCA framework; the projection directions are updated according to the learned label information, and the updated projection directions are in turn used to re-estimate the labels of the unlabeled samples. These two processes depend on each other and are updated alternately, so that the estimated label matrix moves closer to the true label matrix and the discriminative power of the projection directions improves.

1 Canonical correlation analysis

Given a multi-view dataset $\{(\boldsymbol{x}_1, \boldsymbol{y}_1), (\boldsymbol{x}_2, \boldsymbol{y}_2), \cdots, (\boldsymbol{x}_n, \boldsymbol{y}_n)\}$, where the samples of one view are $\boldsymbol{X} = \{\boldsymbol{x}_1, \boldsymbol{x}_2, \cdots, \boldsymbol{x}_n\} \in {\bf{R}}^{p \times n}$ and those of the other view are $\boldsymbol{Y} = \{\boldsymbol{y}_1, \boldsymbol{y}_2, \cdots, \boldsymbol{y}_n\} \in {\bf{R}}^{q \times n}$, and assuming that all sample data have been centered, canonical correlation analysis seeks two projection directions $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$ that maximize the correlation between the projected variables. Specifically, the objective function of CCA can be written as

$ \mathop {\max }\limits_{{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y}} \frac{{\mathit{\boldsymbol{W}}_x^{\rm{T}}{\mathit{\boldsymbol{C}}_{xy}}{\mathit{\boldsymbol{W}}_y}}}{{\sqrt {\mathit{\boldsymbol{W}}_x^{\rm{T}}{\mathit{\boldsymbol{C}}_{xx}}{\mathit{\boldsymbol{W}}_x}} \sqrt {\mathit{\boldsymbol{W}}_y^{\rm{T}}{\mathit{\boldsymbol{C}}_{yy}}{\mathit{\boldsymbol{W}}_y}} }} $ (1)

where the within-view covariance matrix of view $\boldsymbol{X}$ is $\boldsymbol{C}_{xx} = \boldsymbol{X}\boldsymbol{X}^{\rm{T}}$, that of view $\boldsymbol{Y}$ is $\boldsymbol{C}_{yy} = \boldsymbol{Y}\boldsymbol{Y}^{\rm{T}}$, and the between-view covariance matrix is $\boldsymbol{C}_{xy} = \boldsymbol{X}\boldsymbol{Y}^{\rm{T}}$.
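For concreteness, the following minimal NumPy/SciPy sketch solves Eq. (1) through the standard reduction to a generalized eigenvalue problem; the function name and the small ridge term added for numerical stability are our own choices, not part of the original formulation.

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, d, reg=1e-6):
    """Plain CCA for Eq. (1).  X: p x n, Y: q x n, both column-centered.
    Returns Wx (p x d) and Wy (q x d)."""
    p, q = X.shape[0], Y.shape[0]
    Cxx = X @ X.T + reg * np.eye(p)   # within-view covariance of X
    Cyy = Y @ Y.T + reg * np.eye(q)   # within-view covariance of Y
    Cxy = X @ Y.T                     # between-view covariance
    # Optimality conditions of Eq. (1) in block form:
    #   [0   Cxy] [Wx]          [Cxx  0 ] [Wx]
    #   [Cyx  0 ] [Wy] = beta * [ 0  Cyy] [Wy]
    A = np.block([[np.zeros((p, p)), Cxy], [Cxy.T, np.zeros((q, q))]])
    B = np.block([[Cxx, np.zeros((p, q))], [np.zeros((q, p)), Cyy]])
    vals, vecs = eigh(A, B)           # symmetric generalized eigenproblem
    idx = np.argsort(vals)[::-1][:d]  # keep the d largest correlations
    return vecs[:p, idx], vecs[p:, idx]
```

The selected generalized eigenvalues are the canonical correlations, and the low-dimensional features are obtained as $\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}$ and $\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}$.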

2 Joint label prediction and discriminant projection learning for semi-supervised canonical correlation analysis

Suppose the multi-view dataset is semi-supervised: the training set contains only a few supervised samples, and the labels of most training samples are unknown. That is, $\boldsymbol{X} = \{\boldsymbol{X}_{n_l}, \boldsymbol{X}_{n_u}\}$ and $\boldsymbol{Y} = \{\boldsymbol{Y}_{n_l}, \boldsymbol{Y}_{n_u}\}$, where $\boldsymbol{X}_{n_l}$ and $\boldsymbol{Y}_{n_l}$ denote the $n_l$ supervised samples, $\boldsymbol{X}_{n_u}$ and $\boldsymbol{Y}_{n_u}$ denote the $n_u$ unsupervised samples, and $n_l \ll n_u$. Define $\boldsymbol{F} = [\boldsymbol{F}_1, \boldsymbol{F}_2, \cdots, \boldsymbol{F}_n]$ as the predicted label matrix of the samples, where $\boldsymbol{F}_i \in {\bf{R}}^{c}$; $\boldsymbol{F}_{ij} = 1$ indicates that the $i$-th sample belongs to the $j$-th class, and $\boldsymbol{F}_{ij} = 0$ otherwise.

2.1 Joint learning framework

To solve the semi-supervised learning problem in multi-view learning, a joint learning framework that fuses label prediction with model construction is proposed. Its basic idea is to embed label prediction into a discriminant canonical correlation analysis model and to learn the projection directions and the predicted label matrix jointly through alternating iterations. The general form of the joint learning framework can be written as

$ J\left( {{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y},\mathit{\boldsymbol{F}}} \right) = \mathop {\min }\limits_{{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y},\mathit{\boldsymbol{F}}} - H\left( {{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y},\mathit{\boldsymbol{F}}} \right) + R\left( \mathit{\boldsymbol{F}} \right) $ (2)

where $H(\boldsymbol{W}_x, \boldsymbol{W}_y, \boldsymbol{F})$ is the discriminant correlation analysis term. Suppose the predicted label matrix $\boldsymbol{F}$ is known; the function $H(\cdot)$ uses the label matrix $\boldsymbol{F}$ learned by label prediction to improve the discriminative power of the projection directions $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$. In turn, the learned discriminant projection directions $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$ are used to update the label matrix $\boldsymbol{F}$. Alternating these two steps until convergence yields the optimal projection directions $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$. To improve the performance of label prediction, Eq. (2) introduces the label-prediction regularization term $R(\boldsymbol{F})$.

2.2 Discriminant correlation analysis

The function $H$ in Eq. (2) can be defined in several ways. For simplicity, the DCCA[15] model is adopted to define the objective function

$ H\left( {{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y},\mathit{\boldsymbol{F}}} \right) = \mathop {\max }\limits_{{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y}} \frac{{\mathit{\boldsymbol{W}}_x^{\rm{T}}{{\mathit{\boldsymbol{\tilde C}}}_{xy}}{\mathit{\boldsymbol{W}}_y}}}{{\sqrt {\left( {\mathit{\boldsymbol{W}}_x^{\rm{T}}{\mathit{\boldsymbol{C}}_{xx}}{\mathit{\boldsymbol{W}}_x}} \right)\left( {\mathit{\boldsymbol{W}}_y^{\rm{T}}{\mathit{\boldsymbol{C}}_{yy}}{\mathit{\boldsymbol{W}}_y}} \right)} }} $ (3)

where the covariance matrices are $\tilde{\boldsymbol{C}}_{xy} = \boldsymbol{X}\boldsymbol{F}\boldsymbol{F}^{\rm{T}}\boldsymbol{Y}^{\rm{T}}$, $\boldsymbol{C}_{xx} = \boldsymbol{X}\boldsymbol{X}^{\rm{T}}$, and $\boldsymbol{C}_{yy} = \boldsymbol{Y}\boldsymbol{Y}^{\rm{T}}$. According to the conclusions of DCCA[15], Eq. (3) minimizes the between-class correlation and maximizes the within-class correlation, which makes the projection directions more discriminative.
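To make the role of the label matrix concrete, the hypothetical helper below builds a one-hot $\boldsymbol{F}$ from class indices and forms $\tilde{\boldsymbol{C}}_{xy} = \boldsymbol{X}\boldsymbol{F}\boldsymbol{F}^{\rm{T}}\boldsymbol{Y}^{\rm{T}}$; since $(\boldsymbol{F}\boldsymbol{F}^{\rm{T}})_{ij}$ is 1 exactly when samples $i$ and $j$ are assigned to the same class, maximizing $\boldsymbol{W}_x^{\rm{T}}\tilde{\boldsymbol{C}}_{xy}\boldsymbol{W}_y$ emphasizes the correlation of same-class pairs across views. The $n \times c$ layout of $\boldsymbol{F}$ (rows indexing samples) is an assumption of this sketch.

```python
import numpy as np

def one_hot(labels, n_classes):
    """labels: length-n integer array with classes in {0, ..., n_classes-1}.
    Returns F (n x c) with F[i, j] = 1 iff sample i belongs to class j."""
    F = np.zeros((len(labels), n_classes))
    F[np.arange(len(labels)), labels] = 1.0
    return F

def discriminant_cross_covariance(X, Y, F):
    """C~_xy = X F F^T Y^T of Eq. (3).  X: p x n, Y: q x n, F: n x c."""
    return X @ F @ F.T @ Y.T
```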

2.3 Label-estimation regularization term

To guide the label-prediction process, the label matrix of the supervised samples, $\boldsymbol{L} = \{\boldsymbol{l}_1, \boldsymbol{l}_2, \cdots, \boldsymbol{l}_{n_l}\}$, is introduced; the predicted labels of the samples should then be as close as possible to the true labels. The label-estimation regularization term can thus be written as

$ \begin{array}{*{20}{c}} {R\left( \mathit{\boldsymbol{F}} \right) = \mathop {\min }\limits_F \mu \left\| {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right\|_{\rm{F}}^2 + }\\ {\alpha \;{\rm{tr}}{{\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}^{\rm{T}}}\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}\\ {{\rm{s}}.\;{\rm{t}}.\;\;\;{\mathit{\boldsymbol{F}}_{ij}} \ge 0} \end{array} $ (4)

where $\mu$ and $\alpha$ are balance parameters. The entries of the predicted label matrix $\boldsymbol{F}$ are discrete values in $\{0, 1\}$, so solving for $\boldsymbol{F}$ directly is an NP-hard problem. For tractability, $\boldsymbol{F}$ is relaxed to continuous values with a technique similar to that in [20-21]. To guarantee that every entry of $\boldsymbol{F}$ is nonnegative, the method of Yang et al.[22-23] is adopted to impose a nonnegativity constraint on the entries of $\boldsymbol{F}$, which helps improve the performance of the algorithm. Meanwhile, $\boldsymbol{F}_i$ (which indicates the class of sample $\boldsymbol{x}_i$) has only one nonzero entry, with all other entries equal to zero; following [22-23], the predicted label matrix $\boldsymbol{F}$ is constrained to be column-orthogonal.

2.4 Objective function

Combining the discriminant correlation analysis term and the label-estimation term above, the general form defined in Eq. (2) can be written as

$ \begin{array}{*{20}{c}} {J\left( {{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y},\mathit{\boldsymbol{F}}} \right) = \mathop {\min }\limits_{{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y},\mathit{\boldsymbol{F}}} - \mathit{\boldsymbol{W}}_x^{\rm{T}}{{\mathit{\boldsymbol{\tilde C}}}_{xy}}{\mathit{\boldsymbol{W}}_y} + }\\ {\mu \left\| {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right\|_{\rm{F}}^2 + \alpha \;{\rm{tr}}{{\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}^{\rm{T}}}\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}\\ {{\rm{s}}.\;{\rm{t}}.\;\;\;\mathit{\boldsymbol{W}}_x^{\rm{T}}{\mathit{\boldsymbol{C}}_{xx}}{\mathit{\boldsymbol{W}}_x} = \mathit{\boldsymbol{I}},\mathit{\boldsymbol{W}}_y^{\rm{T}}{\mathit{\boldsymbol{C}}_{yy}}{\mathit{\boldsymbol{W}}_y} = \mathit{\boldsymbol{I}},{\mathit{\boldsymbol{F}}_{ij}} \ge 0} \end{array} $ (5)

where $\boldsymbol{W}_x^{\rm{T}}\tilde{\boldsymbol{C}}_{xy}\boldsymbol{W}_y$ measures the correlation between the views: maximizing the correlation of same-class samples across views extracts discriminative information while constraining the within-class variation of views $\boldsymbol{X}$ and $\boldsymbol{Y}$ to be small, so that same-class samples stay as close as possible in the low-dimensional feature space, which further improves the discriminative power of the learned projection directions. $\|\boldsymbol{F} - \boldsymbol{L}\|_{\rm{F}}^2$ constrains the predicted label matrix $\boldsymbol{F}$ to be as close as possible to the true label matrix $\boldsymbol{L}$, and ${\rm{tr}}(\boldsymbol{F}^{\rm{T}}\boldsymbol{F} - \boldsymbol{I})^{\rm{T}}(\boldsymbol{F}^{\rm{T}}\boldsymbol{F} - \boldsymbol{I})$, together with the constraint $\boldsymbol{F}_{ij} \ge 0$, keeps the entries of $\boldsymbol{F}$ nonnegative and its columns orthogonal.
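For later use as the convergence criterion of the training procedure, a minimal sketch of evaluating Eq. (5) is given below; taking the trace of the $d \times d$ correlation matrix, the $n \times c$ layout of $\boldsymbol{F}$ and $\boldsymbol{L}$, and the use of the supervised-sample selector $\boldsymbol{D}$ (as in Eq. (8)) in the label-fitting term are our own assumptions.

```python
import numpy as np

def objective_value(X, Y, Wx, Wy, F, L, D, mu, alpha):
    """Value of Eq. (5).  X: p x n, Y: q x n, Wx: p x d, Wy: q x d;
    F, L: n x c predicted/true label matrices; D: n x n diagonal matrix that
    is 1 for supervised samples and 0 otherwise (as in Eq. (8))."""
    c = F.shape[1]
    corr = np.trace(Wx.T @ X @ F @ F.T @ Y.T @ Wy)            # discriminant correlation
    fit = mu * np.trace((F - L).T @ D @ (F - L))              # closeness to true labels
    orth = alpha * np.linalg.norm(F.T @ F - np.eye(c)) ** 2   # column-orthogonality penalty
    return -corr + fit + orth
```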

2.5 Model optimization

The objective function defined in Eq. (5) is optimized by alternating iterations. The specific steps are as follows:

1) Fix $\boldsymbol{F}$ and solve for the projection directions $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$; the objective function defined in Eq. (5) reduces to

$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_{{\mathit{\boldsymbol{W}}_x},{\mathit{\boldsymbol{W}}_y}} - \mathit{\boldsymbol{W}}_x^{\rm{T}}{{\mathit{\boldsymbol{\tilde C}}}_{xy}}{\mathit{\boldsymbol{W}}_y}}\\ {{\rm{s}}.\;{\rm{t}}.\;\;\;\mathit{\boldsymbol{W}}_x^{\rm{T}}{\mathit{\boldsymbol{C}}_{xx}}{\mathit{\boldsymbol{W}}_x} = \mathit{\boldsymbol{I}},\mathit{\boldsymbol{W}}_y^{\rm{T}}{\mathit{\boldsymbol{C}}_{yy}}{\mathit{\boldsymbol{W}}_y} = \mathit{\boldsymbol{I}}} \end{array} $ (6)

Using the method of Lagrange multipliers, the optimization problem in Eq. (6) can be formulated as the generalized eigenvalue problem defined in Eq. (7)[10], namely

$ \left[ {\begin{array}{*{20}{c}} \mathit{\boldsymbol{0}}&{{{\mathit{\boldsymbol{\tilde C}}}_{xy}}}\\ {{{\mathit{\boldsymbol{\tilde C}}}_{yx}}}&\mathit{\boldsymbol{0}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{W}}_x}}\\ {{\mathit{\boldsymbol{W}}_y}} \end{array}} \right] = \beta \left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{C}}_{xx}}}&\mathit{\boldsymbol{0}}\\ \mathit{\boldsymbol{0}}&{{\mathit{\boldsymbol{C}}_{yy}}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{W}}_x}}\\ {{\mathit{\boldsymbol{W}}_y}} \end{array}} \right] $ (7)

where $\tilde{\boldsymbol{C}}_{yx} = \tilde{\boldsymbol{C}}_{xy}^{\rm{T}}$. Applying generalized eigenvalue decomposition to Eq. (7) yields the discriminant projection directions $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$.
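A minimal sketch of this step is given below, assuming our own function name and a small ridge term added to $\boldsymbol{C}_{xx}$ and $\boldsymbol{C}_{yy}$ for numerical stability:

```python
import numpy as np
from scipy.linalg import eigh

def solve_projections(X, Y, F, d, reg=1e-6):
    """Step 1): with F fixed, solve Eq. (7) for Wx (p x d) and Wy (q x d).
    F is the n x c predicted label matrix."""
    p, q = X.shape[0], Y.shape[0]
    Cxy_t = X @ F @ F.T @ Y.T                     # discriminant cross-covariance
    Cxx = X @ X.T + reg * np.eye(p)
    Cyy = Y @ Y.T + reg * np.eye(q)
    A = np.block([[np.zeros((p, p)), Cxy_t], [Cxy_t.T, np.zeros((q, q))]])
    B = np.block([[Cxx, np.zeros((p, q))], [np.zeros((q, p)), Cyy]])
    vals, vecs = eigh(A, B)                       # generalized eigendecomposition
    idx = np.argsort(vals)[::-1][:d]              # d largest eigenvalues beta
    return vecs[:p, idx], vecs[p:, idx]
```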

2) Fix $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$ and solve for the predicted label matrix $\boldsymbol{F}$; Eq. (5) reduces to

$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{W}}_x^{\rm{T}}\mathit{\boldsymbol{XF}}{\mathit{\boldsymbol{F}}^{\rm{T}}}{\mathit{\boldsymbol{Y}}^{\rm{T}}}{\mathit{\boldsymbol{W}}_y} + \mu {\rm{tr}}\left\{ {{{\left( {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right)}^{\rm{T}}}\mathit{\boldsymbol{D}}\left( {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right)} \right\} + }\\ {\alpha \;{\rm{tr}}{{\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}^{\rm{T}}}\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}\\ {{\rm{s}}.\;{\rm{t}}.\;\;\;\;{\mathit{\boldsymbol{F}}_{ij}} \ge 0} \end{array} $ (8)

where $\boldsymbol{D} = \left[{\begin{array}{*{20}{l}} \boldsymbol{I}&{\bf{0}}\\ {\bf{0}}&{\bf{0}} \end{array}} \right] \in {\bf{R}}^{n \times n}$ and $\boldsymbol{I} \in {\bf{R}}^{n_l \times n_l}$ is an identity matrix, so that $\boldsymbol{D}$ selects the supervised samples.

To solve Eq. (8), the gradient of the objective with respect to $\boldsymbol{F}$ is first computed as

$ \begin{array}{*{20}{c}} {\frac{{\partial J}}{{\partial {\mathit{\boldsymbol{F}}_{ij}}}} = - \left( {{\mathit{\boldsymbol{X}}^{\rm{T}}}{\mathit{\boldsymbol{W}}_x}\mathit{\boldsymbol{W}}_y^{\rm{T}}\mathit{\boldsymbol{YF}} + {\mathit{\boldsymbol{Y}}^{\rm{T}}}{\mathit{\boldsymbol{W}}_y}\mathit{\boldsymbol{W}}_x^{\rm{T}}\mathit{\boldsymbol{XF}} - } \right.}\\ {{{\left. {2\mu \mathit{\boldsymbol{D}}\left( {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right) - 2\alpha \mathit{\boldsymbol{F}}\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)} \right)}_{ij}}} \end{array} $ (9)

To ensure that the learned predicted label matrix $\boldsymbol{F}$ is nonnegative, an update rule similar to that of nonnegative matrix factorization[24] is defined as

$ {F_{ij}} \leftarrow {F_{ij}} \times \frac{{{{\left( {\mathit{\boldsymbol{KF}} + \mathit{\boldsymbol{DL}} + 2\alpha \mathit{\boldsymbol{F}}} \right)}_{ij}}}}{{{{\left( {\mathit{\boldsymbol{DF}} + 2\alpha \mathit{\boldsymbol{F}}{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}}} \right)}_{ij}}}} $ (10)

where $\boldsymbol{K} = \boldsymbol{X}^{\rm{T}}\boldsymbol{W}_x\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y} + \boldsymbol{Y}^{\rm{T}}\boldsymbol{W}_y\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}$.

Further, to restore the entries of the predicted label matrix $\boldsymbol{F}$ to 0 or 1, $\boldsymbol{F}$ is discretized by Eq. (11), namely

$ {F_{ij}} = \left\{ {\begin{array}{*{20}{l}} 1&{{F_{ij}}\;{\rm{is\;the\;maximum\;of\;}}{\mathit{\boldsymbol{F}}_i}}\\ 0&{{\rm{otherwise}}} \end{array}} \right. $ (11)
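The sketch below implements the multiplicative update of Eq. (10) as printed and the discretization of Eq. (11); the function names, the $n \times c$ layout of $\boldsymbol{F}$, and the small epsilon guarding the division are our own assumptions.

```python
import numpy as np

def update_F(F, X, Y, Wx, Wy, L, D, alpha, eps=1e-12):
    """Step 2): with Wx, Wy fixed, apply the multiplicative update of Eq. (10).
    F, L: n x c label matrices; D: n x n diagonal selector of supervised samples."""
    K = X.T @ Wx @ Wy.T @ Y + Y.T @ Wy @ Wx.T @ X       # n x n matrix K below Eq. (10)
    num = K @ F + D @ L + 2.0 * alpha * F               # numerator of Eq. (10)
    den = D @ F + 2.0 * alpha * F @ (F.T @ F) + eps     # denominator of Eq. (10)
    return F * (num / den)

def discretize(F):
    """Eq. (11): set the row maximum of each F_i to 1 and all other entries to 0."""
    out = np.zeros_like(F)
    out[np.arange(F.shape[0]), F.argmax(axis=1)] = 1.0
    return out
```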

In summary, the training procedure of the proposed algorithm is as follows (a compact code sketch is given after the steps):

Input: training sample pair {$\boldsymbol{X}, \boldsymbol{Y}$}, true label matrix $\boldsymbol{L}$ of the supervised samples, and convergence threshold $T_{0}$.

Output: projection matrices $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$, and predicted label matrix $\boldsymbol{F}$.

1) Randomly initialize the predicted label matrix $\boldsymbol{F}$, the projection matrices $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$, and the objective value $J_{0}$.

2) Fix $\boldsymbol{F}$ and update $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$ by Eq. (7).

3) Fix $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$ and update $\boldsymbol{F}$ by Eq. (10).

4) Normalize the predicted label matrix: $\boldsymbol{F} = \boldsymbol{F}{\left( {\boldsymbol{F}{\boldsymbol{F}^{\rm{T}}}} \right)^{ - \frac{1}{2}}}$.

5) Compute the objective value $J$ according to Eq. (5). If $\left| {J - {J_0}} \right| \le {T_0}$, the objective has converged; go to step 6). Otherwise set $J_{0} = J$ and go to step 2).

6) Compute the predicted label matrix $\boldsymbol{F}$ of the samples according to Eq. (11).

7) Compute $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$ by Eq. (7).
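Putting the pieces together, a compact driver for steps 1) to 7) could look as follows. It assumes the `solve_projections`, `update_F`, `discretize`, and `objective_value` helpers sketched earlier in this section are in scope; the random initialization, the iteration cap, the default $\mu = \alpha = 80$, and the $(\boldsymbol{F}^{\rm{T}}\boldsymbol{F})^{-1/2}$ form of the step-4 normalization (which matches the $n \times c$ layout used here) are our own choices.

```python
import numpy as np

def sqrtm_psd(M, eps=1e-12):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, eps, None))) @ V.T

def train_jlpdpscca(X, Y, L, D, d, mu=80.0, alpha=80.0, T0=1e-3, max_iter=100):
    """Alternating optimization of Eq. (5); returns Wx, Wy and the predicted F."""
    n, c = L.shape
    rng = np.random.default_rng(0)
    F = np.abs(rng.random((n, c)))                    # step 1): random nonnegative init
    J0 = np.inf
    for _ in range(max_iter):
        Wx, Wy = solve_projections(X, Y, F, d)        # step 2): Eq. (7)
        F = update_F(F, X, Y, Wx, Wy, L, D, alpha)    # step 3): Eq. (10)
        F = F @ np.linalg.inv(sqrtm_psd(F.T @ F))     # step 4): normalization
        J = objective_value(X, Y, Wx, Wy, F, L, D, mu, alpha)
        if abs(J - J0) <= T0:                         # step 5): convergence check
            break
        J0 = J
    F = discretize(F)                                 # step 6): Eq. (11)
    Wx, Wy = solve_projections(X, Y, F, d)            # step 7): final projections
    return Wx, Wy, F
```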

2.6 Feature combination and classification

This section combines the projected features of the two views. With the learned projection matrices $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$, the features can be combined serially as $\left( {\begin{array}{*{20}{c}} {\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}}\\ {\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}} \end{array}} \right)$ or in parallel as $\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X} + \boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}$. The serial strategy is adopted: the features of the two views are stacked into the overall feature $\boldsymbol{Z} = [\boldsymbol{z}_1, \boldsymbol{z}_2, \cdots, \boldsymbol{z}_n] = \left( {\begin{array}{*{20}{c}} {\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}}\\ {\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}} \end{array}} \right) \in {\bf{R}}^{2d \times n}$, and the combined features are classified with the nearest-neighbor classifier.

2.7 Testing procedure

The testing procedure of the model is as follows (a code sketch follows the steps):

1) With the learned projection matrices $\boldsymbol{W}_x$ and $\boldsymbol{W}_y$, extract the features of the test samples $\boldsymbol{X}_{\rm{test}}$ and $\boldsymbol{Y}_{\rm{test}}$, i.e., $\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}_{\rm{test}}$ and $\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}_{\rm{test}}$. Extract the features of the training samples in the same way: $\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}_{\rm{train}}$ and $\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}_{\rm{train}}$.

2) Feature combination. Serially combine the features $\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}_{\rm{test}}$ and $\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}_{\rm{test}}$ of the test samples into the overall feature $\boldsymbol{Z}_{\rm{test}} = \left( {\begin{array}{*{20}{c}} {\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}_{\rm{test}}}\\ {\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}_{\rm{test}}} \end{array}} \right)$, and combine the training samples in the same way into $\boldsymbol{Z}_{\rm{train}} = \left( {\begin{array}{*{20}{c}} {\boldsymbol{W}_x^{\rm{T}}\boldsymbol{X}_{\rm{train}}}\\ {\boldsymbol{W}_y^{\rm{T}}\boldsymbol{Y}_{\rm{train}}} \end{array}} \right)$.

3) Classify with the nearest-neighbor classifier under the Euclidean distance: for each test feature in $\boldsymbol{Z}_{\rm{test}}$, compute its Euclidean distance to all features in $\boldsymbol{Z}_{\rm{train}}$ and assign it the class of the nearest training feature, which gives the classification result of $\boldsymbol{Z}_{\rm{test}}$.
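A minimal sketch of this test-time pipeline with a Euclidean-distance nearest-neighbor rule is given below; the function names are ours, and a 1-nearest-neighbor decision is used for simplicity (the experiments in Section 3.2 use $k = 3$).

```python
import numpy as np
from scipy.spatial.distance import cdist

def combine(Wx, Wy, X, Y):
    """Serial combination: stack the projected features of the two views (2d x n)."""
    return np.vstack([Wx.T @ X, Wy.T @ Y])

def nn_classify(Wx, Wy, X_train, Y_train, train_labels, X_test, Y_test):
    """Nearest-neighbor classification of the combined test features against the
    combined training features under the Euclidean distance.
    train_labels: length-n array of class labels of the training samples."""
    Z_train = combine(Wx, Wy, X_train, Y_train)
    Z_test = combine(Wx, Wy, X_test, Y_test)
    dist = cdist(Z_test.T, Z_train.T)           # pairwise Euclidean distances
    return train_labels[dist.argmin(axis=1)]    # class of the nearest training sample
```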

2.8 Multi-view extension of JLPDPSCCA

In complex real-world scenarios, the same object is often described from multiple perspectives, i.e., several (more than two) different forms of data represent different characteristics of the same object. To handle this case, JLPDPSCCA is extended to the multi-view model JLPDPSMCCA, which can process several views simultaneously.

Suppose the $k$ views of $n$ objects are $\{\boldsymbol{X}^{(j)} \in {\bf{R}}^{d_j \times n}\}_{j = 1}^k$, where $\boldsymbol{X}^{(j)} = [\boldsymbol{x}_1^{(j)}, \boldsymbol{x}_2^{(j)}, \cdots, \boldsymbol{x}_n^{(j)}]$, $\boldsymbol{x}_i^{(j)} \in {\bf{R}}^{d_j}\;(i = 1, 2, \cdots, n)$, and $d_j$ is the feature dimensionality of the $j$-th view. $\{\boldsymbol{x}_i^{(j)}\}_{i = 1}^m$ and $\{\boldsymbol{x}_i^{(j)}\}_{i = m + 1}^n$ are the supervised and unsupervised samples of the $j$-th view, respectively. The objective function of JLPDPSMCCA is

$ \begin{array}{*{20}{c}} {J\left( {{\mathit{\boldsymbol{W}}^{\left( 1 \right)}},{\mathit{\boldsymbol{W}}^{\left( 2 \right)}}, \cdots ,{\mathit{\boldsymbol{W}}^{\left( k \right)}},\mathit{\boldsymbol{F}}} \right) = }\\ {\mathop {\min }\limits_{{\mathit{\boldsymbol{W}}^{\left( 1 \right)}},{\mathit{\boldsymbol{W}}^{\left( 2 \right)}}, \cdots ,{\mathit{\boldsymbol{W}}^{\left( k \right)}},\mathit{\boldsymbol{F}}} - \sum\limits_{j = 1}^k {\sum\limits_{i = 1,i \ne j}^k {{\mathit{\boldsymbol{W}}^{\left( j \right){\rm{T}}}}{{\mathit{\boldsymbol{\tilde C}}}^{\left( {ji} \right)}}{\mathit{\boldsymbol{W}}^{\left( i \right)}}} } + }\\ {\mu \left\| {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right\|_{\rm{F}}^2 + \alpha \;{\rm{tr}}{{\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}^{\rm{T}}}\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)}\\ {{\rm{s}}.\;{\rm{t}}.\;\;\;\sum\limits_{j = 1}^k {{\mathit{\boldsymbol{W}}^{\left( j \right){\rm{T}}}}{\mathit{\boldsymbol{C}}^{\left( {jj} \right)}}{\mathit{\boldsymbol{W}}^{\left( j \right)}}} = \mathit{\boldsymbol{I}},\;\;\;\;\;{F_{ij}} \ge 0} \end{array} $ (12)

where $\tilde{\boldsymbol{C}}^{(ji)} = \boldsymbol{X}^{(j)}\boldsymbol{F}\boldsymbol{F}^{\rm{T}}\boldsymbol{X}^{(i){\rm{T}}}$ is the between-view covariance matrix and $\boldsymbol{C}^{(jj)} = \boldsymbol{X}^{(j)}\boldsymbol{X}^{(j){\rm{T}}}$ is the within-view covariance matrix. Under this formulation, JLPDPSCCA can be regarded as a special case of JLPDPSMCCA: when there are only two views, JLPDPSMCCA reduces to JLPDPSCCA.

Eq. (12) is again optimized with an alternating iteration strategy.

First, fixing $\boldsymbol{F}$ and using the method of Lagrange multipliers leads to the following problem

$ \begin{array}{*{20}{c}} {L\left( {{\mathit{\boldsymbol{W}}^{\left( 1 \right)}},{\mathit{\boldsymbol{W}}^{\left( 2 \right)}}, \cdots ,{\mathit{\boldsymbol{W}}^{\left( k \right)}},\lambda } \right) = }\\ { - \sum\limits_{j = 1}^k {\sum\limits_{i = 1,i \ne j}^k {{\mathit{\boldsymbol{W}}^{\left( j \right){\rm{T}}}}{{\mathit{\boldsymbol{\tilde C}}}^{\left( {ji} \right)}}{\mathit{\boldsymbol{W}}^{\left( i \right)}}} } + }\\ {\lambda \left( {\sum\limits_{j = 1}^k {{\mathit{\boldsymbol{W}}^{\left( j \right){\rm{T}}}}{\mathit{\boldsymbol{C}}^{\left( {jj} \right)}}{\mathit{\boldsymbol{W}}^{\left( j \right)}} - \mathit{\boldsymbol{I}}} } \right)} \end{array} $ (13)

$\frac{{\partial L}}{{\partial {\mathit{\boldsymbol{W}}^{(j)}}}} = 0$,得到式(14)这个广义特征值问题

$ \sum\limits_{i \ne j} {{{\mathit{\boldsymbol{\tilde C}}}^{\left( {ji} \right)}}{\mathit{\boldsymbol{W}}^{\left( i \right)}}} = \lambda {\mathit{\boldsymbol{C}}^{\left( {jj} \right)}}{\mathit{\boldsymbol{W}}^{\left( j \right)}} $ (14)

Applying generalized eigenvalue decomposition to Eq. (14) then gives the projection matrix of the $j$-th view, $\boldsymbol{W}^{(j)} \in {\bf{R}}^{d_j \times d}$.

To solve for the predicted label matrix $\boldsymbol{F}$, its gradient is computed as

$ \begin{array}{*{20}{c}} {\frac{{\partial J}}{{\partial {\mathit{\boldsymbol{F}}_{ij}}}} = - \left( {\sum\limits_{j = 1}^k {\sum\limits_{i = 1,j \ne i}^k {{\mathit{\boldsymbol{X}}^{\left( j \right){\rm{T}}}}{\mathit{\boldsymbol{W}}^{\left( j \right)}}{\mathit{\boldsymbol{W}}^{\left( i \right){\rm{T}}}}{\mathit{\boldsymbol{X}}^{\left( i \right)}}\mathit{\boldsymbol{F}}} } - } \right.}\\ {{{\left. {2\mu \mathit{\boldsymbol{D}}\left( {\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{L}}} \right) - 2\alpha \mathit{\boldsymbol{F}}\left( {{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}} - \mathit{\boldsymbol{I}}} \right)} \right)}_{ij}}} \end{array} $ (15)

Further, following a strategy similar to that of nonnegative matrix factorization[24], the update rule is defined as

$ {\mathit{\boldsymbol{F}}_{ij}} \leftarrow {\mathit{\boldsymbol{F}}_{ij}} \times \frac{{{{\left( {\mathit{\boldsymbol{KF}} + \mathit{\boldsymbol{DL}} + 2\alpha \mathit{\boldsymbol{F}}} \right)}_{ij}}}}{{{{\left( {\mathit{\boldsymbol{DF}} + 2\alpha \mathit{\boldsymbol{F}}{\mathit{\boldsymbol{F}}^{\rm{T}}}\mathit{\boldsymbol{F}}} \right)}_{ij}}}} $ (16)

where $\boldsymbol{K} = \sum\limits_{j = 1}^k {\sum\limits_{i = 1, j \ne i}^k {{\boldsymbol{X}^{(j){\rm{T}}}}} } {\boldsymbol{W}^{(j)}}{\boldsymbol{W}^{(i){\rm{T}}}}{\boldsymbol{X}^{(i)}}$.
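For the projection step of the multi-view model, Eq. (14) for all views can be assembled into a single block generalized eigenvalue problem, in the spirit of multiset CCA. The sketch below is our own illustration of this assembly (the function name and ridge term are assumptions), not code from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def solve_multiview_projections(Xs, F, d, reg=1e-6):
    """Xs: list of k view matrices X^(j) of shape d_j x n; F: n x c label matrix.
    Stacks Eq. (14) over all views into one block eigenproblem and returns
    the per-view projection matrices W^(j) of shape d_j x d."""
    dims = [X.shape[0] for X in Xs]
    k = len(Xs)
    total = sum(dims)
    offs = np.concatenate([[0], np.cumsum(dims)])
    A = np.zeros((total, total))
    B = np.zeros((total, total))
    for j in range(k):
        sj = slice(offs[j], offs[j + 1])
        B[sj, sj] = Xs[j] @ Xs[j].T + reg * np.eye(dims[j])   # C^(jj)
        for i in range(k):
            if i == j:
                continue
            si = slice(offs[i], offs[i + 1])
            A[sj, si] = Xs[j] @ F @ F.T @ Xs[i].T             # C~^(ji)
    vals, vecs = eigh(A, B)
    idx = np.argsort(vals)[::-1][:d]
    W = vecs[:, idx]
    return [W[offs[j]:offs[j + 1], :] for j in range(k)]
```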

3 Experiments and analysis

To verify the effectiveness of the proposed method, four face datasets are used: AR, Extended Yale B, Multi-PIE, and ORL.

3.1 Dataset description

1) AR face dataset. The subset used here contains 100 subjects, 14 images each, for a total of 1 400 frontal face images with varying expressions, occlusions, and illumination conditions. All face images are uniformly cropped and scaled to 66×48 pixels.

2) Extended Yale B face dataset. It contains 2 414 frontal face images of 38 subjects, about 64 images per subject, captured under different illumination conditions and from different angles. All face images are uniformly cropped and scaled to 32×32 pixels.

3) Multi-PIE face dataset. It contains 41 368 images of 68 subjects captured under 43 illumination conditions, 13 poses, and 5 expressions. Each subject has 24 images, and each pose has 1 632 images. All face images are uniformly cropped and scaled to 64×64 pixels.

4) ORL face dataset. It contains 400 frontal face images, 10 per subject, captured under different illumination conditions and with varying facial expressions and facial details. All face images are uniformly cropped and scaled to 32×32 pixels.

3.2 Compared methods

To verify the effectiveness of the proposed method, classification is performed with a nearest-neighbor classifier under the Euclidean distance, with the number of neighbors $k$ set to 3; every experiment is repeated 10 times and the average is reported as the final result. The following seven algorithms are used for comparison and are briefly described as follows:

1) Unsupervised methods: canonical correlation analysis[9-10], the new locality-preserving CCA[13], and local-density-enhanced CCA[14].

2) Supervised methods: discriminative CCA[15] and locally discriminative CCA[16].

3) Semi-supervised methods: semi-supervised CCA (SCCA)[25] and label-propagation-based semi-supervised CCA[18].

3.3 Parameter settings

For the Extended Yale B and Multi-PIE face datasets, 10 images per subject are randomly selected as the training set and the remaining images as the test set; for the AR and ORL face datasets, 7 images per subject are randomly selected for training and the rest for testing. To construct the multi-view feature sets, the original features are taken as the $X$ set and, following a technique similar to [26], the low-frequency component of the Daubechies orthogonal wavelet transform is taken as the $Y$ set. Note that the proposed approach is a multi-feature dimensionality-reduction and fusion method; the low-frequency wavelet component is chosen as the $Y$ set in the experiments, but LBP[18] features could also be extracted as the $Y$ set. On all datasets, 3 images per subject are randomly selected as supervised samples and the remaining samples are left unlabeled. The parameters of the proposed method are $\alpha = \mu = 80$, and the convergence threshold is $T_0 = 10^{-3}$. All compared methods use their default parameter settings. To remove redundant features, PCA is used to preprocess the Extended Yale B, AR, Multi-PIE, and ORL images.
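As an illustration of how the second view might be constructed, the sketch below uses the PyWavelets package to keep the low-frequency (approximation) sub-band of a single-level 2-D Daubechies wavelet transform of each face image; the package choice, the wavelet name 'db4', and the function name are our assumptions rather than details specified in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_view(images):
    """images: iterable of 2-D face images.
    Returns the second-view matrix Y whose columns are the flattened
    low-frequency (approximation) coefficients of each image."""
    cols = []
    for img in images:
        cA, (cH, cV, cD) = pywt.dwt2(img, 'db4')   # single-level 2-D DWT
        cols.append(cA.ravel())                    # keep only the low-frequency part
    return np.array(cols).T                        # (features x n), matching X's layout
```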

3.4 Convergence analysis

With the parameter settings described in Section 3.3, the convergence of the algorithm is analyzed on the Extended Yale B, AR, Multi-PIE, and ORL face datasets. The results are shown in Fig. 1: after about 10 iterations the objective value barely changes, i.e., the algorithm has converged. This shows that the proposed method converges within a small number of iterations.

Fig. 1 Convergence analysis ((a) Extended Yale B face dataset; (b) AR face dataset; (c) Multi-PIE face dataset; (d) ORL face dataset)

3.5 Recognition performance under different dimensions

With the parameter settings described in Section 3.3, the recognition performance of the algorithms is tested under different feature dimensions. The results are shown in Fig. 2. On all four face datasets the proposed method outperforms the other methods, especially when only a few features are used; on the Extended Yale B and AR face datasets in particular, its recognition rate is far higher than those of the other methods. This is because the projection directions learned by the joint learning model are more discriminative, so good recognition can be obtained with only a few features. When the feature dimension becomes large enough, the recognition rates of all algorithms level off.

Fig. 2 Comparison of recognition performance under different dimensions ((a) Extended Yale B face dataset; (b) AR face dataset; (c) Multi-PIE face dataset; (d) ORL face dataset)

3.6 Recognition performance with different numbers of supervised samples

With the parameter settings described in Section 3.3, the recognition performance is tested with different numbers of supervised samples. The results are shown in Fig. 3. The recognition performance of all methods improves as the number of supervised samples increases. Compared with the other methods, the proposed method performs better; on the AR, Multi-PIE, and ORL datasets in particular, its recognition rate exceeds 90%. This indicates that JLPDPSCCA makes better use of both supervised and unlabeled samples to accomplish the recognition task and is better suited to semi-supervised scenarios.

Fig. 3 Comparison of recognition performance with different numbers of supervised samples ((a) Extended Yale B face dataset; (b) AR face dataset; (c) Multi-PIE face dataset; (d) ORL face dataset)

3.7 Running-time evaluation

The experiments are carried out in MATLAB R2014b on a platform with an Intel(R) Core(TM) i3-6100 CPU at 3.70 GHz and 4.00 GB of memory, and the running time of the proposed algorithm is measured. The results are listed in Table 1. Compared with CCA, DCCA, ALPCCA, SCCA, and LDCCA, the training time of the proposed method is relatively high because the joint learning model requires several alternating iterations. However, as Table 1 shows, its testing time is short: the projections learned by the joint model are more discriminative, so good results can be obtained with only a few features, which shortens testing. Note that in practical applications the training can be performed offline in advance, whereas the testing performance directly determines the data-processing capability of the whole learning system.

Table 1 Average running time of the algorithms /s

Dataset          | CCA             | DCCA            | LPbSCCA           | ALPCCA
                 | train    test   | train    test   | train      test   | train    test
Extended Yale B  | 0.0027   0.04   | 0.0025   0.0397 | 2.0276     0.0372 | 0.0281   0.0394
AR               | 0.0037   0.0611 | 0.0056   0.0238 | 47.2663    0.0311 | 0.0892   0.038
Multi-PIE        | 0.0042   0.0308 | 0.0063   0.0228 | 55.3175    0.0278 | 0.0894   0.0376
ORL              | 0.002    0.005  | 0.0025   0.0041 | 1.442      0.0046 | 0.0295   0.0043

Dataset          | LDECCA          | SCCA            | LDCCA             | JLPDPSCCA
                 | train    test   | train    test   | train      test   | train    test
Extended Yale B  | 5.7417   0.0399 | 0.0122   0.0422 | 0.0172     0.0244 | 0.3264   0.0331
AR               | 23.4063  0.0286 | 0.007    0.0381 | 0.0379     0.0243 | 0.9873   0.0191
Multi-PIE        | 22.1745  0.3121 | 0.0076   0.3419 | 0.0297     0.0254 | 0.7844   0.0165
ORL              | 3.7582   0.049  | 0.0045   0.0045 | 0.0204     0.0069 | 0.1915   0.004

Note: bold type marks the shortest training and testing times on each dataset.

4 Conclusion

Existing multi-view semi-supervised models perform label prediction first and model construction afterwards, which leads to locally optimal projections and harms the subsequent classification results. To remedy this defect of the two-step learning strategy, this paper proposes a joint learning framework that fuses label prediction with model construction: the predicted labels are embedded into the defined discriminant canonical correlation analysis model, the predicted labels and the discriminant projection directions are updated by alternating iterations, and constraints are imposed on the predicted labels so that they approach the true labels. Experimental results on four face datasets show that the proposed algorithm achieves higher recognition rates than the compared algorithms, confirming the effectiveness of the joint learning framework. A limitation of the current model is that it does not enforce similar samples to have similar predicted labels; a Laplacian regularization term could be introduced on top of the current model to impose this constraint. Future work will therefore focus on introducing such a Laplacian term and on nonlinear extensions.

References

  • [1] Sun S L, Xie X J, Yang M. Multiview uncorrelated discriminant analysis[J]. IEEE Transactions on Cybernetics, 2016, 46(12): 3272–3284. [DOI:10.1109/TCYB.2015.2502248]
  • [2] Shen X B, Sun Q S. Orthogonal multiset canonical correlation analysis based on fractional-order and its application in multiple feature extraction and recognition[J]. Neural Processing Letters, 2015, 42(2): 301–316. [DOI:10.1007/s11063-014-9358-5]
  • [3] Chen X H, Chen S C, Xue H, et al. A unified dimensionality reduction framework for semi-paired and semi-supervised multi-view data[J]. Pattern Recognition, 2012, 45(5): 2005–2018. [DOI:10.1016/j.patcog.2011.11.008]
  • [4] Hou C P, Zhang C S, Wu Y, et al. Multiple view semi-supervised dimensionality reduction[J]. Pattern Recognition, 2010, 43(3): 720–730. [DOI:10.1016/j.patcog.2009.07.015]
  • [5] Gönen M, Alpaydın E. Multiple kernel learning algorithms[J]. The Journal of Machine Learning Research, 2011, 12: 2211–2268.
  • [6] Cortes C, Mohri M, Rostamizadeh A. Two-stage learning kernel algorithms[C]//Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel: ICML, 2010: 239-246.
  • [7] Zhou Z H, Li M. Semi-supervised learning by disagreement[J]. Knowledge and Information Systems, 2010, 24(3): 415–439. [DOI:10.1007/s10115-009-0209-z]
  • [8] Yu C C, Liu Y, Tan L, et al. Multi-view semi-supervised collaboration classification algorithm with combination of agreement and disagreement label rules[J]. Journal of Computer Applications, 2013, 33(11): 3090–3093. [DOI:10.11772/j.issn.1001-9081.2013.11.3090]
  • [9] Hardoon D R, Szedmak S R, Shawe-Taylor J. Canonical correlation analysis:an overview with application to learning methods[J]. Neural Computation, 2004, 16(12): 2639–2664. [DOI:10.1162/0899766042321814]
  • [10] Hotelling H. Relations between two sets of variates[J]. Biometrika, 1936, 28(3-4): 321–377. [DOI:10.2307/2333955]
  • [11] Zheng W M, Zhou X Y, Zou C R, et al. Facial expression recognition using kernel canonical correlation analysis[J]. IEEE Transactions on Neural Networks, 2006, 17(1): 233–238. [DOI:10.1109/TNN.2005.860849]
  • [12] Nielsen A A. Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data[J]. IEEE Transactions on Image Processing, 2002, 11(3): 293–305. [DOI:10.1109/83.988962]
  • [13] Wang F S, Zhang D Q. A new locality-preserving canonical correlation analysis algorithm for multi-view dimensionality reduction[J]. Neural Processing Letters, 2013, 37(2): 135–146. [DOI:10.1007/s11063-012-9238-9]
  • [14] Ko W J, Yu J Y, Chen W Y, et al. Enhanced canonical correlation analysis with local density for cross-domain visual classification[C]//Proceedings of 2017 IEEE International Conference on Acoustics, Speech and Signal Processing. New Orleans, LA, USA: IEEE, 2017: 1757-1761.[DOI:10.1109/ICASSP.2017.7952458]
  • [15] Sun T K, Chen S C, Yang J Y, et al. A novel method of combined feature extraction for recognition[C]//Proceedings of the 8th IEEE International Conference on Data Mining. Pisa, Italy: IEEE, 2008: 1043-1048.[DOI:10.1109/ICDM.2008.28]
  • [16] Peng Y, Zhang D Q, Zhang J C. A new canonical correlation analysis algorithm with local discrimination[J]. Neural Processing Letters, 2010, 31(1): 1–15. [DOI:10.1007/s11063-009-9123-3]
  • [17] Xia J M, Yang J A, Kang K. Canonical correlation analysis based on local sparse representation and linear discriminative analysis[J]. Control and Decision, 2014, 29(7): 1279–1284. [DOI:10.13195/j.kzyjc.2013.0444]
  • [18] Shen X B, Sun Q S. A novel semi-supervised canonical correlation analysis and extensions for multi-view dimensionality reduction[J]. Journal of Visual Communication and Image Representation, 2014, 25(8): 1894–1904. [DOI:10.1016/j.jvcir.2014.09.004]
  • [19] Wan J W, Wang H Y, Yang M. Cost sensitive semi-supervised canonical correlation analysis for multi-view dimensionality reduction[J]. Neural Processing Letters, 2017, 45(2): 411–430. [DOI:10.1007/s11063-016-9532-z]
  • [20] Shi J B, Malik J. Normalized cuts and image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888–905. [DOI:10.1109/34.868688]
  • [21] Von Luxburg U. A tutorial on spectral clustering[J]. Statistics and Computing, 2007, 17(4): 395–416. [DOI:10.1007/s11222-007-9033-z]
  • [22] Yang Y, Shen H T, Nie F P, et al. Nonnegative spectral clustering with discriminative regularization[C]//Proceedings of the 25th AAAI Conference on Artificial Intelligence. San Francisco, California: ACM, 2011: 555-560.
  • [23] Yang Y, Yang Y, Shen H T, et al. Discriminative nonnegative spectral clustering with out-of-sample extension[J]. IEEE Transactions on Knowledge and Data Engineering, 2013, 25(8): 1760–1771. [DOI:10.1109/TKDE.2012.118]
  • [24] Lee D D, Seung H S. Algorithms for non-negative matrix factorization[C]//Advances in Neural Information Processing Systems. British Columbia, Canada: MIT Press, 2001: 556-562.
  • [25] Kursun O, Alpaydin E. Canonical correlation analysis for multiview semisupervised feature extraction[M]//Rutkowski L, Scherer R, Tadeusiewicz R, et al. Artificial Intelligence and Soft Computing. Heidelberg: Springer, 2010: 430-436.[DOI:10.1007/978-3-642-13208-7_54]
  • [26] Peng Y, Zhang D Q. Semi-supervised canonical correlation analysis algorithm[J]. Journal of Software, 2008, 19(11): 2822–2832.