Published: 2018-08-16 | DOI: 10.11834/jig.170506 | 2018, Volume 23, Number 8 | Image Analysis and Recognition

 Received: 2017-09-15; revised: 2018-03-05. Supported by: National Natural Science Foundation of China (61473148, GGA1513701); National Key R&D Program of the Ministry of Science and Technology (2017YFF0107304). First author: Li Kaiyu, born 1969, male, associate professor, Ph.D.; research interests: pattern recognition, signal processing. E-mail: LKY_401@nuaa.edu.cn. Cui Yifeng, male, master's student; research interest: face recognition. E-mail: 1055319134@qq.com. Wang Ping, male, professor; research interests: nondestructive testing, image recognition. E-mail: zeit@263.net. Xu Guili, male, professor; research interests: vehicle recognition, image recognition. E-mail: guilixu@nuaa.edu.cn. CLC number: TP391. Document code: A. Article ID: 1006-8961(2018)08-1154-09


Structured low-rank dictionary learning for face recognition
Li Kaiyu, Hu Yan, Cui Yifeng, Wang Ping, Xu Guili
Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Supported by: National Natural Science Foundation of China(61473148, GGA1513701)

# Abstract

Objective Face images collected in real conditions are usually affected by environmental factors such as illumination and occlusion. As a result, images of the same subject can differ considerably while images of different subjects can look similar, which greatly degrades the accuracy of face recognition. To address this problem, a face recognition algorithm based on discriminative structured low-rank dictionary learning is proposed, built on the theory of low-rank matrix recovery.

Method Guided by the label information of the training samples, the proposed algorithm adds low-rank regularization and structured sparsity to discriminative dictionary learning. During dictionary learning, the reconstruction error of the training samples first constrains the relationship between the samples and the dictionary. The Fisher discrimination criterion is then applied to the coding coefficients so that they remain discriminative. Because noise in the training samples can weaken the discriminability of the dictionary, low-rank regularization is imposed on the dictionary following the theory of low-rank matrix recovery. Structured sparsity is also enforced during dictionary learning to preserve structure information and guarantee optimal classification of the samples. Finally, test samples are classified according to their reconstruction error.

Result Experiments are performed on the AR and ORL face databases. On the AR database, to analyze the effect of sample dimensionality, the training samples comprise six first-session images per person: one scarf-occluded image, two sunglasses-occluded images, and three images with facial expression and illumination changes; the test samples are chosen in the same way. For every method, the recognition rate rises with image dimensionality. Comparing sparse representation-based classification (SRC) with discriminative K-SVD (DKSVD) shows that dictionary learning in DKSVD reduces the influence of uncertain factors in the training samples on the recognition result. Comparing discriminative low-rank dictionary learning for sparse representation (DLRD_SR) with Fisher discriminative dictionary learning (FDDL) shows that low-rank regularization of the dictionary improves the recognition rate by at least 5.8% when the images contain noise such as occlusion. Comparing the proposed algorithm with DLRD_SR shows that imposing the Fisher discrimination criterion on dictionary learning noticeably improves the recognition rate, while the ideal sparse code guarantees optimal classification of the test samples. For 500-dimensional images, part of which are occluded by a scarf or sunglasses, the recognition rate is 85.2%. In the AR database, sunglasses and scarf occlusions cover roughly 20% and 40% of the face, respectively. To verify the validity of the proposed algorithm under different expression and illumination changes and under scarf and sunglasses occlusion, experiments are run on specific combinations of training images. For every combination, the proposed algorithm is clearly superior when the faces are occluded. When the training images contain only expression and illumination changes plus sunglasses occlusion, its recognition rate exceeds that of the other algorithms by at least 2.7%; with scarf occlusion instead, by at least 3.6%; and with both sunglasses and scarf occlusion, by at least 1.9%. On the ORL database, the recognition rate for unoccluded images is 95.2%, slightly below that of FDDL. When random block occlusion reaches 20% of the image, the proposed algorithm outperforms the SRC, DKSVD, FDDL, and DLRD_SR algorithms. When the occlusion reaches 50%, the recognition rates of all the aforementioned algorithms drop sharply, but that of the proposed algorithm remains the highest.

Conclusion The proposed algorithm is robust when face images are affected by factors such as occlusion, and the results show that it is feasible for face recognition.

# Key words

face recognition; low-rank regularization; label information; structured sparse; Fisher discrimination criterion; dictionary learning

# 1.1 The FDDL algorithm model

 ${J_{\left( {\mathit{\boldsymbol{D}},\mathit{\boldsymbol{X}}} \right)}} = \mathop {\arg \min }\limits_{\left( {\mathit{\boldsymbol{D}},\mathit{\boldsymbol{X}}} \right)} \left( \begin{array}{l} \sum\limits_{i = 1}^c {r\left( {{\mathit{\boldsymbol{Y}}_i},\mathit{\boldsymbol{D}},{\mathit{\boldsymbol{X}}_i}} \right)} + \\ {\lambda _1}{\left\| \mathit{\boldsymbol{X}} \right\|_1} + {\lambda _2}F\left( \mathit{\boldsymbol{X}} \right) \end{array} \right)$ (1)

# 1.2 Discriminative reconstruction error

 $F\left( \mathit{\boldsymbol{X}} \right) = {\rm{tr}}\left( {{S_{\rm{w}}}\left( \mathit{\boldsymbol{X}} \right)} \right) - {\rm{tr}}\left( {{S_{\rm{b}}}\left( \mathit{\boldsymbol{X}} \right)} \right) + \eta \left\| \mathit{\boldsymbol{X}} \right\|_{\rm{F}}^2$ (5)
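The Fisher term of Eq. (5) can be evaluated directly from the traces of the within-class and between-class scatter matrices of the coding coefficients. A minimal NumPy sketch, assuming `X` holds one coding vector per column and `labels` gives each sample's class (both names are illustrative):

```python
import numpy as np

def fisher_term(X, labels, eta=1.0):
    """Compute tr(S_w(X)) - tr(S_b(X)) + eta * ||X||_F^2, a sketch of Eq. (5).

    X      : (k, n) coding-coefficient matrix, one column per sample
    labels : length-n array of class labels
    eta    : weight of the stabilizing Frobenius term
    """
    m = X.mean(axis=1, keepdims=True)                 # global mean coefficient
    tr_sw, tr_sb = 0.0, 0.0
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.mean(axis=1, keepdims=True)           # class-mean coefficient
        tr_sw += ((Xc - mc) ** 2).sum()               # trace of within-class scatter
        tr_sb += Xc.shape[1] * ((mc - m) ** 2).sum()  # trace of between-class scatter
    return tr_sw - tr_sb + eta * (X ** 2).sum()
```

A smaller value rewards coefficients that cluster tightly within a class while the class means spread apart; the `eta` term keeps the objective bounded below.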

# 2.1 Problem formulation

 ${\mathit{\boldsymbol{X}}^*} = \left( {\begin{array}{*{20}{c}} {X_1^*}&0& \cdots &0&0\\ 0&{X_2^*}& \cdots &0&0\\ \vdots&\vdots &{}& \vdots&\vdots \\ 0&0& \cdots &{X_{c - 1}^*}&0\\ 0&0& \cdots &0&{X_c^*} \end{array}} \right)$ (6)

# 2.2.1 Design of the structured low-rank dictionary model

The FDDL algorithm cannot handle training samples with heavy noise, so this paper proposes a structured low-rank dictionary learning algorithm. Building on FDDL, it introduces low-rank regularization into dictionary learning and, to guarantee optimal classification of the samples, an ideal sparse code ${\mathit{\boldsymbol{Q}}}$. Suppose the samples are $\mathit{\boldsymbol{Y}} = \left[ {{\mathit{\boldsymbol{y}}_1}, {\mathit{\boldsymbol{y}}_2}, {\mathit{\boldsymbol{y}}_3}, {\mathit{\boldsymbol{y}}_4}} \right]$, where ${\mathit{\boldsymbol{y}}_1}$ and ${\mathit{\boldsymbol{y}}_2}$ belong to class 1, ${\mathit{\boldsymbol{y}}_3}$ to class 2, and ${\mathit{\boldsymbol{y}}_4}$ to class 3, and the dictionary is $\mathit{\boldsymbol{D}} = \left[ {{\mathit{\boldsymbol{D}}_1}, {\mathit{\boldsymbol{D}}_2}, {\mathit{\boldsymbol{D}}_3}} \right]$ with ${\mathit{\boldsymbol{D}}_1} = \left[ {{\mathit{\boldsymbol{d}}_1}, {\mathit{\boldsymbol{d}}_2}} \right]$, ${\mathit{\boldsymbol{D}}_2} = {\mathit{\boldsymbol{d}}_3}$, ${\mathit{\boldsymbol{D}}_3} = {\mathit{\boldsymbol{d}}_4}$. Then the ideal sparse coefficients $\mathit{\boldsymbol{Q}}$ of the samples $\mathit{\boldsymbol{Y}}$ on the dictionary $\mathit{\boldsymbol{D}}$ are

 $\mathit{\boldsymbol{Q}} = \left[ {{\mathit{\boldsymbol{q}}_1},{\mathit{\boldsymbol{q}}_2},{\mathit{\boldsymbol{q}}_3},{\mathit{\boldsymbol{q}}_4}} \right] = \left[ {\begin{array}{*{20}{c}} 1&1&0&0\\ 1&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array}} \right]$ (7)
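Since each entry of $\mathit{\boldsymbol{Q}}$ is 1 exactly when a dictionary atom and a sample share a class label, the matrix can be built with one broadcast comparison. A minimal sketch, with hypothetical `sample_labels`/`atom_labels` arrays:

```python
import numpy as np

def ideal_sparse_code(atom_labels, sample_labels):
    """Build the ideal sparse coefficient matrix Q of Eq. (7):
    Q[p, q] = 1 when dictionary atom p and sample q belong to the same class."""
    atom_labels = np.asarray(atom_labels)[:, None]      # (num_atoms, 1)
    sample_labels = np.asarray(sample_labels)[None, :]  # (1, num_samples)
    return (atom_labels == sample_labels).astype(float)
```

With the labels of the example above (atoms of classes 1, 1, 2, 3 and samples of classes 1, 1, 2, 3), this reproduces the block-diagonal matrix in Eq. (7).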

 ${J_{\left( {\mathit{\boldsymbol{D}},\mathit{\boldsymbol{X}}} \right)}} = \arg \min \left( \begin{array}{l} \sum\limits_{i = 1}^c {r\left( {{\mathit{\boldsymbol{Y}}_i},\mathit{\boldsymbol{D}},{\mathit{\boldsymbol{X}}_i}} \right)} + \\ {\lambda _1}{\left\| \mathit{\boldsymbol{X}} \right\|_1} + {\lambda _2}F\left( \mathit{\boldsymbol{X}} \right) + \\ H\left( {\mathit{\boldsymbol{D}},\mathit{\boldsymbol{X}},\mathit{\boldsymbol{Q}}} \right) \end{array} \right)$ (8)

# 2.2.2 Low-rank regularization and structured representation of the dictionary

 $H\left( {\mathit{\boldsymbol{D}},\mathit{\boldsymbol{X}},\mathit{\boldsymbol{Q}}} \right) = \alpha \sum\limits_{i = 1}^c {{{\left\| {{\mathit{\boldsymbol{D}}_i}} \right\|}_*}} + \beta \left\| {\mathit{\boldsymbol{X}} - \mathit{\boldsymbol{Q}}} \right\|_{\rm{F}}^2$ (9)

# 2.2.3 Implementation of the structured low-rank dictionary optimization algorithm

1) Update the coding coefficients ${\mathit{\boldsymbol{X}}_i}\left( {i = 1, 2, \cdots , c} \right)$ while keeping the dictionary $\mathit{\boldsymbol{D}}$ and all coefficients $\mathit{\boldsymbol{X}}_j$ ($j \ne i$) fixed; afterwards, assemble all updated ${\mathit{\boldsymbol{X}}_i}$ ($i = 1, 2, \cdots, c$) into the coding coefficient matrix $\mathit{\boldsymbol{X}}$. 2) Update the sub-dictionaries $\mathit{\boldsymbol{D}}_i$ ($i = 1, 2, \cdots, c$) while keeping $\mathit{\boldsymbol{D}}_j$ ($j \ne i$) fixed. Because the coefficients $\mathit{\boldsymbol{X}}_i^{i}$ of the samples $\mathit{\boldsymbol{Y}}_i$ on $\mathit{\boldsymbol{D}}_i$ are updated together with $\mathit{\boldsymbol{D}}_i$, all coefficients other than $\mathit{\boldsymbol{X}}_i^{i}$ are also held fixed. After initializing the dictionary $\mathit{\boldsymbol{D}}$, the coefficients $\mathit{\boldsymbol{X}}$ and the dictionary $\mathit{\boldsymbol{D}}$ are updated alternately in this way until the stopping criterion is met. Finally, a test sample $\mathit{\boldsymbol{y}}$ is classified on the basis of the learned structured low-rank dictionary. The detailed steps are as follows:

1) Initialize the dictionary $\mathit{\boldsymbol{D}}$: the eigenvectors corresponding to the training samples $\mathit{\boldsymbol{Y}}_i$ serve as the atoms of the initial sub-dictionary $\mathit{\boldsymbol{D}}_i$;

2) Update the coding coefficients $\mathit{\boldsymbol{X}}_i$ ($i$=1, 2, …, $c$): keeping the dictionary $\mathit{\boldsymbol{D}}$ and all coefficients $\mathit{\boldsymbol{X}}_j$ ($j \ne i$) fixed, update the coding coefficients class by class, which reduces Eq. (8) to the sparse coding problem

 ${J_{\left( {{\mathit{\boldsymbol{X}}_i}} \right)}} = \arg \mathop {\min }\limits_{{\mathit{\boldsymbol{X}}_i}} \left( \begin{array}{l} \left\| {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i} \right\|_{\rm{F}}^2 + \left\| {{\mathit{\boldsymbol{Y}}_i} - \mathit{\boldsymbol{D}}{\mathit{\boldsymbol{X}}_i}} \right\|_{\rm{F}}^2 + \\ \sum\limits_{j = 1,j \ne i}^c {\left\| {{\mathit{\boldsymbol{D}}_j}\mathit{\boldsymbol{X}}_i^j} \right\|_{\rm{F}}^2} + {\lambda _1}{\left\| {{\mathit{\boldsymbol{X}}_i}} \right\|_1} + \\ {\lambda _2}{F_i}\left( {{\mathit{\boldsymbol{X}}_i}} \right) + \beta \left\| {{\mathit{\boldsymbol{X}}_i} - {\mathit{\boldsymbol{Q}}_i}} \right\|_{\rm{F}}^2 \end{array} \right)$ (10)

Eq. (10) can be solved by the IPM method [19]. 3) Update the sub-dictionaries $\mathit{\boldsymbol{D}}_i$ ($i$=1, 2, …, $c$): keeping $\mathit{\boldsymbol{D}}_j$ ($j \ne i$) and all coefficients of $\mathit{\boldsymbol{X}}$ other than $\mathit{\boldsymbol{X}}_i^{i}$ fixed, Eq. (8) reduces to the sub-dictionary problem

 ${J_{\left( {{\mathit{\boldsymbol{D}}_i}} \right)}} = \arg \min \left( \begin{array}{l} \left\| {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i - \sum\limits_{j = 1,j \ne i}^c {{\mathit{\boldsymbol{D}}_j}\mathit{\boldsymbol{X}}_i^j} } \right\|_{\rm{F}}^2 + \\ \left\| {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i} \right\|_{\rm{F}}^2 + \\ \sum\limits_{j = 1,j \ne i}^c {\left\| {{\mathit{\boldsymbol{D}}_j}\mathit{\boldsymbol{X}}_i^j} \right\|_{\rm{F}}^2} + \alpha {\left\| {{\mathit{\boldsymbol{D}}_i}} \right\|_*} \end{array} \right)$ (11)

where $S\left( {{\mathit{\boldsymbol{D}}_i}} \right)$ is defined as

 $S\left( {{\mathit{\boldsymbol{D}}_i}} \right) = \left\| {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i - \sum\limits_{j = 1,j \ne i}^c {{\mathit{\boldsymbol{D}}_j}\mathit{\boldsymbol{X}}_i^j} } \right\|_{\rm{F}}^2 + \left\| {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i} \right\|_{\rm{F}}^2$

 $\begin{array}{*{20}{c}} {\mathop {\min }\limits_{{\mathit{\boldsymbol{D}}_i},\mathit{\boldsymbol{X}}_i^i,{\mathit{\boldsymbol{E}}_i}} {{\left\| {\mathit{\boldsymbol{X}}_i^i} \right\|}_1} + \alpha {{\left\| {{\mathit{\boldsymbol{D}}_i}} \right\|}_*} + \beta {{\left\| {{\mathit{\boldsymbol{E}}_i}} \right\|}_{2,1}} + \lambda S\left( {{\mathit{\boldsymbol{D}}_i}} \right)}\\ {{\rm{s}}.\;{\rm{t}}.\;\;\;{\mathit{\boldsymbol{Y}}_i} = {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i + {\mathit{\boldsymbol{E}}_i}} \end{array}$ (12)

 $\begin{array}{*{20}{c}} {\mathop {\min }\limits_{{\mathit{\boldsymbol{D}}_i},{\mathit{\boldsymbol{E}}_i},\mathit{\boldsymbol{X}}_i^i} {{\left\| \mathit{\boldsymbol{Z}} \right\|}_1} + \alpha {{\left\| \mathit{\boldsymbol{J}} \right\|}_*} + \beta {{\left\| {{\mathit{\boldsymbol{E}}_i}} \right\|}_{2,1}} + \lambda S\left( {{\mathit{\boldsymbol{D}}_i}} \right) + }\\ {{\rm{tr}}\left[ {T_1^t\left( {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i - {\mathit{\boldsymbol{E}}_i}} \right)} \right] + {\rm{tr}}\left[ {T_2^t\left( {{\mathit{\boldsymbol{D}}_i} - \mathit{\boldsymbol{J}}} \right)} \right] + }\\ {{\rm{tr}}\left[ {T_3^t\left( {\mathit{\boldsymbol{X}}_i^i - \mathit{\boldsymbol{Z}}} \right)} \right] + \frac{\mu }{2}\left( {\left\| {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i - {\mathit{\boldsymbol{E}}_i}} \right\|_{\rm{F}}^2 + } \right.}\\ {\left. {\left\| {{\mathit{\boldsymbol{D}}_i} - \mathit{\boldsymbol{J}}} \right\|_{\rm{F}}^2 + \left\| {\mathit{\boldsymbol{X}}_i^i - \mathit{\boldsymbol{Z}}} \right\|_{\rm{F}}^2} \right)} \end{array}$ (13)

(1) Fix the other variables and update $\mathit{\boldsymbol{Z}}$

 $\mathit{\boldsymbol{Z}} = \arg \mathop {\min }\limits_\mathit{\boldsymbol{Z}} \left\{ {\frac{1}{\mu }{{\left\| \mathit{\boldsymbol{Z}} \right\|}_1} + \frac{1}{2}\left\| {\mathit{\boldsymbol{Z}} - \left( {\mathit{\boldsymbol{X}}_i^i + \frac{{{T_3}}}{\mu }} \right)} \right\|_{\rm{F}}^2} \right\}$
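This $\mathit{\boldsymbol{Z}}$ subproblem is the standard $\ell_1$ proximal step, whose closed-form solution is element-wise soft thresholding. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def soft_threshold(A, tau):
    """Element-wise shrinkage: the closed-form minimizer of
    tau * ||Z||_1 + 0.5 * ||Z - A||_F^2."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

# In the ALM loop of Eq. (13), assuming X_ii, T3, mu are the variables above:
# Z = soft_threshold(X_ii + T3 / mu, 1.0 / mu)
```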

(2) Fix the other variables and update $\mathit{\boldsymbol{X}}_i^{i}$

 $\begin{array}{*{20}{c}} {\mathit{\boldsymbol{X}}_i^i = \left( {\mathit{\boldsymbol{D}}_i^t{\mathit{\boldsymbol{D}}_i} + } \right.}\\ {{{\left. \mathit{\boldsymbol{I}} \right)}^{ - 1}}\left( {\mathit{\boldsymbol{D}}_i^t\left( {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{E}}_i}} \right) + \mathit{\boldsymbol{Z}} + \frac{{\mathit{\boldsymbol{D}}_i^t{T_1} - {T_3}}}{\mu }} \right)} \end{array}$

(3) Fix the other variables and update $\mathit{\boldsymbol{J}}$, then normalize each column of $\mathit{\boldsymbol{J}}$

 $\mathit{\boldsymbol{J}} = \arg \mathop {\min }\limits_\mathit{\boldsymbol{J}} \left\{ {\frac{\alpha }{\mu }{{\left\| \mathit{\boldsymbol{J}} \right\|}_ * } + \frac{1}{2}\left\| {\mathit{\boldsymbol{J}} - \left( {{\mathit{\boldsymbol{D}}_i} + \frac{{{T_2}}}{\mu }} \right)} \right\|_{\rm{F}}^2} \right\}$
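The $\mathit{\boldsymbol{J}}$ subproblem is a nuclear-norm proximal step, solved in closed form by singular value thresholding. A minimal sketch:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the closed-form minimizer of
    tau * ||J||_* + 0.5 * ||J - A||_F^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # shrink the singular values
    return U @ np.diag(s) @ Vt

# In the ALM loop, assuming D_i, T2, mu, alpha are the variables above:
# J = svt(D_i + T2 / mu, alpha / mu)
```

Shrinking the singular values is what drives each sub-dictionary toward low rank, suppressing the noise directions in the training samples.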

(4) Fix the other variables and update $\mathit{\boldsymbol{D}}_i$, then normalize each column of $\mathit{\boldsymbol{D}}_i$

 $\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{D}}_i} = \left\{ {\frac{{2\lambda }}{\mu }\left[ {{\mathit{\boldsymbol{Y}}_i}\mathit{\boldsymbol{X}}_i^{it} + \left( {\sum\limits_{j = 1,j \ne i}^c {{\mathit{\boldsymbol{D}}_j}\mathit{\boldsymbol{X}}_i^j} } \right)\mathit{\boldsymbol{X}}_i^{it}} \right] + } \right.}\\ {\left. {{\mathit{\boldsymbol{Y}}_i}\mathit{\boldsymbol{X}}_i^{it} - {\mathit{\boldsymbol{E}}_i}\mathit{\boldsymbol{X}}_i^{it} + \mathit{\boldsymbol{J}} + \frac{{{T_1}\mathit{\boldsymbol{X}}_i^{it} - {T_2}}}{\mu }} \right\} \times }\\ {{{\left( {2\left( {\frac{\lambda }{\mu } + 1} \right)\mathit{\boldsymbol{X}}_i^i\mathit{\boldsymbol{X}}_i^{it} + I} \right)}^{ - 1}}} \end{array}$

(5) Fix the other variables and update $\mathit{\boldsymbol{E}}_i$

 $\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{E}}_i} = }\\ {\arg \mathop {\min }\limits_{{\mathit{\boldsymbol{E}}_i}} \left\{ {\frac{\beta }{\mu }{{\left\| {{\mathit{\boldsymbol{E}}_i}} \right\|}_{2,1}} + \frac{1}{2}\left\| {{\mathit{\boldsymbol{E}}_i} - \left( {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i + \frac{{{T_1}}}{\mu }} \right)} \right\|_{\rm{F}}^2} \right\}} \end{array}$
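The $\mathit{\boldsymbol{E}}_i$ subproblem is the proximal step of the $\ell_{2,1}$ norm, which shrinks each column of the residual by its $\ell_2$ norm. A minimal sketch:

```python
import numpy as np

def l21_shrink(G, tau):
    """Column-wise shrinkage: the closed-form minimizer of
    tau * ||E||_{2,1} + 0.5 * ||E - G||_F^2,
    where the l2,1 norm sums the l2 norms of the columns."""
    norms = np.linalg.norm(G, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return G * scale               # columns with norm <= tau are zeroed

# In the ALM loop, assuming Y_i, D_i, X_ii, T1, mu, beta are the variables above:
# E_i = l21_shrink(Y_i - D_i @ X_ii + T1 / mu, beta / mu)
```

Because whole columns are zeroed or kept, the noise term models sample-wise corruption such as occlusion rather than scattered pixel noise.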

(6) Update $T_1$, $T_2$, and $T_3$

 ${T_1} = {T_1} + \mu \left( {{\mathit{\boldsymbol{Y}}_i} - {\mathit{\boldsymbol{D}}_i}\mathit{\boldsymbol{X}}_i^i - {\mathit{\boldsymbol{E}}_i}} \right)$

 ${T_2} = {T_2} + \mu \left( {{\mathit{\boldsymbol{D}}_i} - \mathit{\boldsymbol{J}}} \right)$

 ${T_3} = {T_3} + \mu \left( {\mathit{\boldsymbol{X}}_i^i - \mathit{\boldsymbol{Z}}} \right)$

(7) Update $\mu$

 $\mu = \min \left( {\rho \mu ,{\mu _{\max }}} \right)$
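Steps (6) and (7) are the usual ALM dual ascent on the three residuals followed by growing the penalty parameter. A minimal sketch, assuming the loop variables are NumPy arrays and the defaults `rho=1.1`, `mu_max=1e6` are illustrative choices, not values from the paper:

```python
import numpy as np

def update_multipliers(T1, T2, T3, mu, Yi, Di, Xii, Ei, J, Z,
                       rho=1.1, mu_max=1e6):
    """Dual ascent on the three ALM residuals, then grow the penalty mu."""
    T1 = T1 + mu * (Yi - Di @ Xii - Ei)   # residual of Y_i = D_i X_i^i + E_i
    T2 = T2 + mu * (Di - J)               # residual of D_i = J
    T3 = T3 + mu * (Xii - Z)              # residual of X_i^i = Z
    mu = min(rho * mu, mu_max)            # mu = min(rho * mu, mu_max)
    return T1, T2, T3, mu
```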

4) Iterate: after each pass of steps 2) and 3), examine the value of $J$($\mathit{\boldsymbol{D}}$, $\mathit{\boldsymbol{X}}$); if the stopping threshold is satisfied or the maximum number of iterations is reached, output the sparse codes $\mathit{\boldsymbol{X}}$ and the dictionary $\mathit{\boldsymbol{D}}$; otherwise, repeat steps 2) and 3).

5) Classification: given a test sample $\mathit{\boldsymbol{y}}$, its coding coefficients on the structured low-rank dictionary $\mathit{\boldsymbol{D}}$ are

 $\mathit{\boldsymbol{x}} = \arg \mathop {\min }\limits_\mathit{\boldsymbol{x}} \left\{ {\left\| {\mathit{\boldsymbol{y}} - \mathit{\boldsymbol{Dx}}} \right\|_2^2 + \varepsilon {{\left\| \mathit{\boldsymbol{x}} \right\|}_1}} \right\}$ (14)

 ${e_i} = \left\| {\mathit{\boldsymbol{y}} - {\mathit{\boldsymbol{D}}_i}{\mathit{\boldsymbol{x}}_i}} \right\|_2^2 + \omega \left\| {\mathit{\boldsymbol{x}} - {{\mathit{\boldsymbol{\bar x}}}_i}} \right\|_2^2$ (15)

 $identity\left( \mathit{\boldsymbol{y}} \right) = \arg \mathop {\min }\limits_i \left\{ {{e_i}} \right\}$ (16)
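Eqs. (14)-(16) classify by class-wise reconstruction error. The sketch below makes two simplifying assumptions: a ridge penalty replaces the $\ell_1$ penalty of Eq. (14) so the coding step stays closed-form, and the $\omega$-weighted term of Eq. (15) is omitted because the class-mean coefficients $\mathit{\boldsymbol{\bar x}}_i$ come from training; neither simplification is the paper's exact classifier:

```python
import numpy as np

def classify(y, D_blocks, eps=0.01):
    """Class-wise reconstruction-error classifier sketching Eqs. (14)-(16).

    D_blocks : list of sub-dictionaries D_i, one per class (columns = atoms)
    eps      : ridge weight standing in for the l1 weight of Eq. (14)
    """
    D = np.hstack(D_blocks)                      # full dictionary D
    # ridge-regularized coding of y on D (closed-form stand-in for Eq. (14))
    x = np.linalg.solve(D.T @ D + eps * np.eye(D.shape[1]), D.T @ y)
    errors, start = [], 0
    for Di in D_blocks:                          # Eq. (15) without the omega term
        xi = x[start:start + Di.shape[1]]
        errors.append(np.sum((y - Di @ xi) ** 2))
        start += Di.shape[1]
    return int(np.argmin(errors))                # Eq. (16)
```

The test sample is assigned to the class whose sub-dictionary and coefficients reconstruct it with the smallest error.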

# 3.1 The AR database

The AR face database contains more than 4 000 images of 126 subjects. Each subject's images were captured in two separate sessions of 13 images each: seven with facial expression and illumination changes (unoccluded), three with sunglasses occlusion, and three with scarf occlusion. Fig. 1 shows, for some test images in the AR database, the original image, the low-rank component after low-rank decomposition, and the sparse noise component.

Table 1 Recognition rate of different methods under the first combination of training samples

| Algorithm | SRC | DKSVD | FDDL | DLRD_SR | Proposed |
|---|---|---|---|---|---|
| Recognition rate/% | 76.2 | 81.7 | 85.9 | 89.4 | 92.1 |

Table 2 Recognition rate of different methods under the second combination of training samples

| Algorithm | SRC | DKSVD | FDDL | DLRD_SR | Proposed |
|---|---|---|---|---|---|
| Recognition rate/% | 75.5 | 79.2 | 83.6 | 87.9 | 91.5 |

Table 3 Recognition rate of different methods under the third combination of training samples

| Algorithm | SRC | DKSVD | FDDL | DLRD_SR | Proposed |
|---|---|---|---|---|---|
| Recognition rate/% | 73.8 | 77.9 | 82.1 | 86.1 | 88.0 |

Table 4 Recognition rates (/%) of five methods under different sample dimensions

| Algorithm | 44 | 136 | 255 | 500 |
|---|---|---|---|---|
| SRC | 55.3 | 66.3 | 70.0 | 71.8 |
| DKSVD | 55.0 | 71.7 | 77.8 | 79.7 |
| FDDL | 55.8 | 65.1 | 73.9 | 75.5 |
| DLRD_SR | 65.1 | 70.9 | 80.5 | 82.7 |
| Proposed | 70.1 | 74.3 | 83.4 | 85.2 |

# 3.2 The ORL database

The ORL face database contains 400 images of 40 subjects, captured at different times under varying illumination and with changes in facial expression and facial details. In the experiments, five of each subject's ten images are randomly selected as training images and the remaining five as test images, and a randomly placed block occludes part of every image. Fig. 2 shows some training and test images from the ORL database with 10% random occlusion. Table 5 reports the recognition rates under different levels of random occlusion, comparing the proposed method with the SRC, DKSVD, FDDL, and DLRD_SR algorithms.

Table 5 Recognition rates (/%) with different levels of occlusion on the ORL database

| Algorithm | 0% | 10% | 20% | 30% | 40% | 50% |
|---|---|---|---|---|---|---|
| SRC | 92.0 | 78.6 | 64.1 | 53.7 | 37.9 | 28.3 |
| DKSVD | 88.5 | 81.3 | 72.8 | 61.3 | 45.3 | 35.6 |
| FDDL | 96.6 | 86.4 | 75.5 | 63.1 | 49.2 | 37.1 |
| DLRD_SR | 92.1 | 90.7 | 81.8 | 75.9 | 63.2 | 57.4 |
| Proposed | 95.2 | 91.9 | 83.7 | 78.5 | 69.4 | 60.4 |

# References

• [1] Wu W, Li J H. Research on face recognition based on PCA and LDA[J]. Science and Technology Information, 2008(36): 465–466.
• [2] Fernandes S, Bala J. Performance analysis of PCA-based and LDA-based algorithms for face recognition[J]. International Journal of Signal Processing Systems, 2013, 1(1): 1–6. [DOI:10.12720/ijsps]
• [3] Wright J, Yang A Y, Ganesh A, et al. Robust face recognition via sparse representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 210–227. [DOI:10.1109/TPAMI.2008.79]
• [4] Lian Q S, Shi B S, Chen S Z. Research advances on dictionary learning models, algorithms and applications[J]. Acta Automatica Sinica, 2015, 41(2): 240–260.
• [5] Elad M, Aharon M. Image denoising via sparse and redundant representations over learned dictionaries[J]. IEEE Transactions on Image Processing, 2006, 15(12): 3736–3745. [DOI:10.1109/TIP.2006.881969]
• [6] Yang M, Zhang L. Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary[C]//Daniilidis K, Maragos P, Paragios N, eds. Computer Vision-ECCV 2010. Berlin: Springer-Verlag, 2010: 448-461.
• [7] Kuo H J, Liu Y C, Cheng Y C. Image processing system and method of improving human face recognition: US 9133526 B2[P]. 2016-04-12.
• [8] Ramirez I, Sprechmann P, Sapiro G. Classification and clustering via dictionary learning with structured incoherence and shared features[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, CA: IEEE, 2010: 3501-3508.
• [9] Wang J, Cai J F, Shi Y H, et al. Incoherent dictionary learning for sparse representation based image denoising[C]//Proceedings of the IEEE International Conference on Image Processing. Paris, France: IEEE, 2014: 4582-4586.
• [10] Jiang H L. Face recognition algorithm based on subspace analysis[J]. Computer Systems & Applications, 2017, 26(2): 151–157.
• [11] Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 2006, 54(11): 4311–4322. [DOI:10.1109/TSP.2006.881199]
• [12] Zhang Q, Li B X. Discriminative K-SVD for dictionary learning in face recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, CA: IEEE, 2010: 2691-2698.
• [13] Jiang Z L, Lin Z, Davis L S. Learning a discriminative dictionary for sparse coding via label consistent K-SVD[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI: IEEE, 2011: 1697-1704.
• [14] Yang M, Zhang L, Feng X C, et al. Fisher discrimination dictionary learning for sparse representation[C]//Proceedings of the IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 543-550.
• [15] Chen X Y, Wang C H. Characterized dictionary-based low-rank representation for face recognition[J]. Journal of Computer Applications, 2016, 36(12): 3423–3428. [DOI:10.11772/j.issn.1001-9081.2016.12.3423]
• [16] Ma L, Wang C H, Xiao B H, et al. Sparse representation for face recognition based on discriminative low-rank dictionary learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI: IEEE, 2012: 2586-2593.
• [17] Zhang H X, Zheng Z L, Jia J, et al. Low-rank matrix recovery based on Fisher discriminant criterion[J]. Pattern Recognition and Artificial Intelligence, 2015, 28(7): 651–656.
• [18] Zhang Y M Z, Jiang Z L, Davis L S. Learning structured low-rank representations for image classification[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR: IEEE, 2013: 676-683.
• [19] Rosasco L, Verri A, Santoro M, et al. Iterative projection methods for structured sparsity regularization: MIT-CSAIL-TR-2009-050, CBCL-282[R]. Cambridge, MA: Massachusetts Institute of Technology, 2009.
• [20] Bertsekas D P. Constrained Optimization and Lagrange Multiplier Methods[M]. New York: Academic Press, 1982.
• [21] Martínez A, Benavente R. The AR face database: CVC technical report #24[R]. Bellaterra, Barcelona: Computer Vision Center, 1998.
• [22] Samaria F S, Harter A C. Parameterisation of a stochastic model for human face identification[C]//Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision. Sarasota, FL: IEEE, 1994: 138-142.