A quality assessment method for asymmetrically distorted stereoscopic images considering the binocular rivalry phenomenon

Tang Yiling, Jiang Shunliang, Xu Shaoping, Xiao Jian, Chen Xiaojun (School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China)

Abstract
Objective Existing methods suffer from long feature-extraction time and low prediction accuracy on asymmetrically distorted images, and little work has addressed the classification of symmetrically versus asymmetrically distorted stereoscopic images. To address these problems, a binocular rivalry-based quality assessment method for asymmetrically distorted stereoscopic images is proposed. Method In accordance with the binocular rivalry phenomenon, the difference in quality degradation between the two views of an asymmetrically distorted stereoscopic image is used to generate fusion coefficients for the monocular image features, which fuse the gray-scale features and the HSV (hue-saturation-value) color-space features extracted from the left and right views. At the same time, the differences between the two views in structure, information content, and degree of quality degradation are quantified to obtain binocular difference features. The binocular fusion features and the binocular difference features are then concatenated into a more descriptive stereoscopic quality-aware feature vector, and a support vector regression model is trained to map features to quality. In addition, the binocular difference features are used to train a support vector classification model that distinguishes symmetrically from asymmetrically distorted stereoscopic images. Result The proposed quality prediction model achieves SROCC (Spearman rank-order correlation coefficient) and PLCC (Pearson linear correlation coefficient) values above 0.95 on all four databases, and its root mean square error (RMSE) on the three asymmetric-distortion databases is better than that of the compared algorithms. In the distortion-type classification test on the three asymmetric-distortion databases, LIVE-II (LIVE 3D image quality database phase II), IVC-I (Waterloo-IVC 3D image quality assessment database phase I), and IVC-II (Waterloo-IVC 3D image quality assessment database phase II), the classification accuracy for symmetrically distorted stereoscopic images is 89.91%, 94.76%, and 98.97%, respectively, and that for asymmetrically distorted stereoscopic images is 95.46%, 92.64%, and 96.22%. Conclusion By fusing the quality-aware features of the left and right views according to the binocular rivalry phenomenon, the proposed method improves the accuracy and robustness of quality assessment for asymmetrically distorted stereoscopic images. The extracted binocular difference features can also be used to classify symmetrically and asymmetrically distorted stereoscopic images effectively and with high accuracy.
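As an illustration of the fusion step described above, the following minimal Python sketch derives fusion weights from two assumed per-view quality degradation coefficients and combines the left- and right-view feature vectors; the coefficient definition and the weighting formula here are placeholders, not the paper's exact formulation.

# Minimal sketch of binocular rivalry-based feature fusion (assumed formulation,
# not the paper's exact definition of the degradation coefficients or weights).
import numpy as np

def fusion_weights(deg_left: float, deg_right: float, eps: float = 1e-8):
    """Hypothetical weighting: the less-degraded view dominates, mimicking
    binocular rivalry (deg_* are per-view quality degradation coefficients)."""
    w_left = deg_right / (deg_left + deg_right + eps)
    return w_left, 1.0 - w_left

def fuse_monocular_features(feat_left: np.ndarray, feat_right: np.ndarray,
                            deg_left: float, deg_right: float) -> np.ndarray:
    """Weighted fusion of the gray-scale and HSV statistics from the two views."""
    w_l, w_r = fusion_weights(deg_left, deg_right)
    return w_l * feat_left + w_r * feat_right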
Keywords
Binocular rivalry-based quality assessment of asymmetrically distorted stereoscopic images

Tang Yiling, Jiang Shunliang, Xu Shaoping, Xiao Jian, Chen Xiaojun (School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China)

Abstract
Objective Stereoscopic image quality assessment (SIQA) has recently attracted considerable attention in computer vision. It is essential for parameter setting and system optimization in stereoscopic image applications such as storage, compression, transmission, and display. According to the degree of degradation between the left and right views, distorted stereoscopic images can be divided into two categories: symmetrically and asymmetrically distorted. For symmetrically distorted stereoscopic images, the distortion type and level in the left and right views are essentially the same. Early SIQA methods were effective in evaluating symmetrically distorted images by averaging the scores or features derived from the two views. In practice, however, stereoscopic images are often asymmetrically distorted, with the two views differing in distortion type or level. Simply averaging the quality values of the two views cannot accurately simulate the binocular fusion process and the binocular rivalry phenomenon of the human visual system; consequently, the evaluation accuracy of such methods drops sharply when the quality of asymmetrically distorted stereoscopic images is estimated. Previous studies have shown that when the left and right views of a stereoscopic image exhibit different levels or types of distortion, binocular rivalry is driven primarily by one of the views; that is, during quality evaluation of a stereo pair, the visual quality of one view has a greater impact on the overall quality than that of the other. To address this issue, some methods simulate the binocular rivalry phenomenon and fuse the visual information of the two views by weighted averaging. However, existing methods still suffer from low prediction accuracy on asymmetrically distorted images, and their feature extraction is time-consuming. To improve the evaluation accuracy for asymmetrically distorted images, we develop a binocular rivalry-based no-reference SIQA method. Method Multiple sources of information are used to generate image quality degradation coefficients for the two views, which accurately describe the degradation level of the distorted images. According to the binocular rivalry phenomenon of the human visual system, the degradation coefficients are used to generate fusion coefficients, which fuse the monocular features derived from the two views, including gray-scale features and statistics extracted from the HSV color space. Because the human visual system is sensitive to structural information, a binocular structural similarity map (BSSIM) is constructed to measure the structural difference between the left and right views, and the structural difference features extracted from the BSSIM form one part of the binocular difference features. To further quantify the differences between the left and right views, additional binocular difference features, such as the entropy difference and the degradation difference, are obtained. Finally, the binocular fusion features and the binocular difference features are concatenated into a more descriptive quality-aware feature vector, and a support vector regression model is trained to map the feature vector to the perceptual quality.
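The binocular difference features described above can be illustrated with the following rough Python sketch (assuming 2-D uint8 grayscale views); the SSIM map here stands in for the paper's BSSIM, and the exact maps and statistics used by the authors may differ.

# Illustrative sketch of the binocular difference features: structural,
# entropy, and degradation differences between the left and right views.
import numpy as np
from skimage.metrics import structural_similarity

def shannon_entropy(img: np.ndarray) -> float:
    """Global Shannon entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def binocular_difference_features(left: np.ndarray, right: np.ndarray,
                                  deg_left: float, deg_right: float) -> np.ndarray:
    # Structural difference: statistics of an SSIM map between the two views,
    # standing in for the binocular structural similarity map (BSSIM).
    _, ssim_map = structural_similarity(left, right, data_range=255, full=True)
    # Entropy difference quantifies the information-content gap between views.
    ent_diff = abs(shannon_entropy(left) - shannon_entropy(right))
    # Degradation difference between the two per-view degradation coefficients.
    deg_diff = abs(deg_left - deg_right)
    return np.array([ssim_map.mean(), ssim_map.std(), ent_diff, deg_diff],
                    dtype=np.float32)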
In addition, to distinguish symmetrically from asymmetrically distorted stereoscopic images, a support vector classification model is trained on the binocular difference features. Result To verify the performance of the proposed SIQA method, four publicly available stereoscopic image quality databases are employed. The LIVE 3D IQA Database Phase II (LIVE-II), Waterloo-IVC 3D IQA Database Phase I (IVC-I), and Waterloo-IVC 3D IQA Database Phase II (IVC-II) contain both symmetrically and asymmetrically distorted stereoscopic images, whereas the LIVE 3D IQA Database Phase I (LIVE-I) contains only symmetrically distorted images. The proposed method is compared with 10 state-of-the-art SIQA metrics using three commonly used performance indicators: the Spearman rank-order correlation coefficient (SROCC), the Pearson linear correlation coefficient (PLCC), and the root mean square error (RMSE). The experimental results show that the SROCC and PLCC values (higher is better) of the proposed method exceed 0.95 on all four databases, and its RMSE values (lower is better) on the three asymmetric-distortion databases are lower than those of the compared methods. Additionally, the proposed classifier is tested on the LIVE-II, IVC-I, and IVC-II databases. On LIVE-II, 95.46% of the asymmetrically distorted stereoscopic images are classified correctly. On IVC-I and IVC-II, the classification accuracy for symmetrically distorted images reaches 94.76% and 98.97%, and that for asymmetrically distorted images reaches 92.64% and 96.22%, respectively. Conclusion The proposed image quality degradation coefficients quantify the degradation level of the two views of an asymmetrically distorted stereoscopic image. Using these coefficients to fuse the monocular features yields a more descriptive binocular perception feature vector and improves the prediction accuracy and robustness for asymmetrically distorted stereoscopic images. The proposed classifier can also effectively separate symmetrically from asymmetrically distorted stereoscopic images.
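A hedged sketch of the two learned models follows: support vector regression maps the concatenated quality-aware feature vectors to subjective scores, and a support vector classifier separates symmetrically from asymmetrically distorted stereo pairs using only the binocular difference features. The kernels and hyperparameters are placeholders, not the settings reported in the paper.

# Sketch of the two models trained on the extracted features (placeholder settings).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR, SVC

def train_quality_model(features: np.ndarray, mos: np.ndarray):
    """features: (n_images, n_dims) fusion + difference features; mos: subjective scores."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(features, mos)
    return model

def train_sym_asym_classifier(diff_features: np.ndarray, labels: np.ndarray):
    """labels: 0 = symmetric distortion, 1 = asymmetric distortion."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(diff_features, labels)
    return clf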
Keywords
