无参考图像质量评价研究进展
Progress in no-reference image quality assessment
2021, Vol. 26, No. 2, Pages: 265-286
Print publication date: 2021-02-16
Accepted: 2020-07-27
DOI: 10.11834/jig.200274
方玉明, 眭相杰, 鄢杰斌, 刘学林, 黄丽萍. 无参考图像质量评价研究进展[J]. 中国图象图形学报, 2021,26(2):265-286.
Yuming Fang, Xiangjie Sui, Jiebin Yan, Xuelin Liu, Liping Huang. Progress in no-reference image quality assessment[J]. Journal of Image and Graphics, 2021,26(2):265-286.
图像质量评价一直是图像处理和计算机视觉领域的一个基础问题,图像质量评价模型也广泛应用于图像/视频编码、超分辨率重建和图像/视频视觉质量增强等相关领域。图像质量评价主要包括全参考图像质量评价、半参考图像质量评价和无参考图像质量评价。全参考图像质量评价和半参考图像质量评价分别指预测图像质量时参考信息完全可用和部分可用,而无参考图像质量评价是指预测图像质量时参考信息不可用。虽然全参考和半参考图像质量评价模型较为可靠,但在计算过程中必须依赖参考信息,使得应用场景极为受限。无参考图像质量评价模型因不需要依赖参考信息而有较强的适用性,一直都是图像质量评价领域研究的热点。本文主要概述2012—2020年国内外公开发表的无参考图像质量评价模型,根据模型训练过程中是否需要用到主观分数,将无参考图像质量评价模型分为有监督学习和无监督学习的无参考图像质量评价模型。同时,每类模型分成基于传统机器学习算法的模型和基于深度学习算法的模型。对基于传统机器学习算法的模型,重点介绍相应的特征提取策略及思想;对基于深度学习算法的模型,重点介绍设计思路。此外,本文介绍了图像质量评价在新媒体数据中的研究工作及图像质量评价的应用。最后对介绍的无参考图像质量评价模型进行总结,并指出未来可能的发展方向。
Image quality assessment (IQA) has long been a fundamental problem in image processing and computer vision, and IQA models are widely applied in related areas such as image/video coding, super-resolution reconstruction, and visual quality enhancement. In general, IQA consists of subjective and objective evaluation. Subjective evaluation estimates the visual quality of images through human subjects, usually with the goal of building test benchmarks. Objective evaluation resorts to computational algorithms (i.e., IQA models) to predict visual quality, and its ultimate goal is to produce judgments consistent with those of human observers; the effectiveness of objective IQA models must therefore be verified on benchmarks built through subjective evaluation. Subjective evaluation cannot be embedded into multimedia processing applications because the process is time-consuming and labor-intensive. By contrast, an objective IQA model can work efficiently as an important module in such applications, playing a role in visual quality monitoring, image filtering, and visual quality enhancement. For these reasons, research on objective IQA models has attracted considerable attention from both industry and academia. Objective IQA models fall into three categories: full-reference (FR), reduced-reference (RR), and no-reference/blind (NR) models. FR and RR models assume that the reference information needed to estimate visual quality is completely and partially available, respectively, whereas an NR model assumes that no reference information is available at prediction time. Although reference-based IQA models (i.e., FR and RR models) are relatively reliable, their dependence on reference information restricts them to specific application scenarios. By contrast, NR-IQA models are more flexible because they are free from this constraint, and they have consistently been a popular research topic over the past decades.
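To make the FR/NR distinction concrete, the following minimal sketch (Python, with NumPy and OpenCV assumed available) contrasts a classical full-reference measure, which needs the pristine image, with a crude no-reference sharpness proxy that operates on the distorted image alone. The helper names and the Laplacian-variance proxy are illustrative choices, not models discussed in this survey.

```python
import numpy as np
import cv2  # assumed available; used only for grayscale conversion and the Laplacian


def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Full-reference: the score can only be computed against the pristine image."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


def laplacian_sharpness(distorted: np.ndarray) -> float:
    """No-reference: a crude sharpness proxy computed from the distorted image alone."""
    gray = cv2.cvtColor(distorted, cv2.COLOR_BGR2GRAY) if distorted.ndim == 3 else distorted
    return float(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F).var())
```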
In this study, we survey NR-IQA models published between 2012 and 2020, covering both feature engineering and end-to-end learning techniques. According to whether subjective quality scores are involved in the training procedure, NR-IQA models are classified into two categories: opinion-aware (supervised) and opinion-unaware (unsupervised) models. For a clear and integrated description, each category is further divided into two subclasses: traditional machine learning-based models (MLMs) and deep learning-based models (DLMs). For MLMs, we mainly examine their feature extraction schemes and the principles behind them. In particular, we introduce a feature extraction approach widely adopted in MLMs, namely natural scene statistics (NSS). The principle of NSS is that certain visual features of pristine, distortion-free images follow regular statistical distributions, while different types of distortions disturb these statistics in characteristic ways. Based on this observation, researchers have proposed many NSS-based NR-IQA methods in which the parameters of the fitted distributions serve as quality-aware features, and a machine learning algorithm is then trained on these features to predict image quality.
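As a concrete illustration of the NSS pipeline summarized above, the sketch below (Python, with NumPy and SciPy assumed available) computes mean-subtracted contrast-normalized (MSCN) coefficients and fits a zero-mean generalized Gaussian distribution to them by moment matching; the estimated shape and scale can then serve as quality-aware features. The estimator and all parameter values are illustrative assumptions rather than the exact formulation of any particular model covered in this survey.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as gamma_fn


def mscn_coefficients(image: np.ndarray, sigma: float = 7.0 / 6.0, c: float = 1.0) -> np.ndarray:
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    std = np.sqrt(np.maximum(var, 0.0))
    return (image - mu) / (std + c)


def fit_ggd(x: np.ndarray) -> tuple:
    """Moment-matching estimate of the shape and scale of a zero-mean generalized Gaussian."""
    x = x.ravel()
    rho = np.mean(x * x) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    ratios = gamma_fn(1.0 / alphas) * gamma_fn(3.0 / alphas) / gamma_fn(2.0 / alphas) ** 2
    alpha = alphas[np.argmin(np.abs(ratios - rho))]   # shape: closest ratio match
    sigma = np.sqrt(np.mean(x * x))                   # scale: second moment
    return float(alpha), float(sigma)


# quality-aware features of one image: GGD parameters of its MSCN coefficients
# features = fit_ggd(mscn_coefficients(gray_image))
```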
Another well-known feature extraction approach relies on dictionary learning, which is frequently combined with sparse coding. Its core component is to learn a dictionary by searching for a group of over-complete bases, which then serve as a reference system for image representation: a test image can be represented, directly or indirectly, through the learned dictionary by means of sparse codes or cluster centroids. These representations are further used as quality-aware features to capture variations in image quality.
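A minimal sketch of this codebook route is given below (Python, with scikit-learn assumed available): an unsupervised codebook is learned by clustering normalized patches from unlabeled images, and a test image is then encoded by max-pooling its patch-to-atom similarities, broadly in the spirit of codebook-based NR-IQA features. The patch size, codebook size, and encoding are illustrative choices, not the settings of any specific published model.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans


def extract_patches(gray: np.ndarray, patch: int = 7, stride: int = 7) -> np.ndarray:
    """Collect locally normalized patches (one per row) from a grayscale image."""
    h, w = gray.shape
    rows = [gray[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]
    x = np.asarray(rows, dtype=np.float64)
    x -= x.mean(axis=1, keepdims=True)                      # remove local mean
    x /= (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)  # contrast normalization
    return x


def learn_codebook(training_patches: np.ndarray, n_atoms: int = 100) -> MiniBatchKMeans:
    """Learn an unsupervised codebook; cluster centroids act as dictionary atoms."""
    return MiniBatchKMeans(n_clusters=n_atoms, n_init=3, random_state=0).fit(training_patches)


def encode_image(gray: np.ndarray, codebook: MiniBatchKMeans) -> np.ndarray:
    """Quality-aware feature: max-pooled similarity of the image's patches to each atom."""
    patches = extract_patches(gray)
    similarity = patches @ codebook.cluster_centers_.T   # soft assignment scores
    return similarity.max(axis=0)                        # one value per atom
```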
For the latter subclass (i.e., DLMs), the design principles described in detail in this paper mostly correspond to different architectures of deep neural networks. In particular, we introduce three schemes for designing opinion-aware DLMs, as well as the strategies commonly used in opinion-unaware DLMs.
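As a minimal, opinion-aware illustration of the deep learning route (a sketch only, with PyTorch assumed available, and not one of the three design schemes analyzed in the paper), the snippet below regresses a scalar quality score from an image patch with a tiny convolutional network trained against subjective scores using an L1 loss.

```python
import torch
import torch.nn as nn


class TinyNRIQA(nn.Module):
    """Minimal patch-based NR-IQA regressor: conv features + global pooling + MLP head."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 64),
                                  nn.ReLU(inplace=True), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x)).squeeze(-1)     # one quality score per patch


# one training step against mean opinion scores (MOS); the batch below is a toy stand-in
model, criterion = TinyNRIQA(), nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
patches, mos = torch.rand(8, 3, 64, 64), torch.rand(8)
optimizer.zero_grad()
loss = criterion(model(patches), mos)
loss.backward()
optimizer.step()
```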
To keep the coverage of the various topics balanced and to clearly exhibit the differences between NR-IQA models designed for natural images and those designed for other types of images, we introduce them in separate subsections. In addition, we briefly introduce IQA research on new media, including virtual reality, light field, and underwater sonar images, together with the applications of IQA models. Finally, an in-depth conclusion is drawn in the last section: we summarize the current achievements and limitations of MLMs and DLMs, and highlight potential development trends and directions for further improving NR-IQA models from the perspectives of both image content and model design.
图像质量评价; 人类视觉系统; 视觉感知; 自然统计特征; 机器学习; 深度学习
image quality assessment (IQA); human visual system (HVS); visual perception; natural scene statistics (NSS); machine learning; deep learning