Published: 2018-04-16
GDC 2017 Conference Column
Received: 2017-07-10; revised: 2017-09-28
Supported by: National Natural Science Foundation of China (61320106008, 61502541, 61772140); Natural Science Foundation of Guangdong Province, Doctoral Start-up Project (2016A030310202); Fundamental Research Funds for the Central Universities, Sun Yat-sen University Young Teacher Cultivation Project (16lgpy39); Science and Technology Planning Project of Guangdong Province (2015B010129008)
About the first author:
Li Langyu (1993-), male, master's student in computer technology at Sun Yat-sen University; his main research interest is digital image processing. E-mail: lily43@mail2.sysu.edu.cn.
CLC number: TP391
Document code: A
Article ID: 1006-8961(2018)04-0572-11
Abstract
Objective To obtain good high-resolution reconstruction results, existing super-resolution convolutional neural networks require increasingly deep architectures and more training, which brings problems such as heavy dependence on large numbers of samples, difficulty of training caused by the large number of parameters, a large number of required training iterations, and high hardware demands. To address these problems, this paper proposes an improved super-resolution reconstruction network model. Method Unlike traditional single-input models, we adopt a dual-input network model with mutually complementary details: besides the feature-extraction and mapping network of the original single-input SRCNN model, a new input is added. Exploiting the local self-similarity of images, we construct a detail-supplement network to complement the image features, and a single convolution layer fuses the features obtained by the detail-supplement network with those extracted by the feature-extraction network to reconstruct the high-resolution image. Result We compare the proposed method with other mainstream methods, both quantitatively and visually, from subjective and objective perspectives. At a network depth similar to SRCNN's, the PSNR of the proposed method for 3× upscaling is 0.17 dB and 0.08 dB higher than that of SRCNN on the Set5 and Set14 datasets, respectively. Subjectively, the proposed method restores image edges and texture details well. Conclusion Experiments show that the proposed mutual-detail network model can produce effective reconstructed images and preserve more image detail with less training and a relatively shallow network.
Keywords
super-resolution reconstruction; deep learning; convolutional neural network; nonlinear mapping
Abstract
Objective Single-image super-resolution (SR) is a classical problem in computer vision. In visual information processing, high-resolution images are desired because they carry considerable useful information in applications such as medical imaging, remote sensing, video surveillance, and entertainment. However, in some scenes, such as long-distance shooting, only low-resolution images of specific objects can be obtained due to the limitations of physical devices. SR has attracted considerable attention from computer vision communities in the past decades. We address the problem of generating a high-resolution image given a low-resolution image, which is commonly referred to as single-image SR. Early methods include bicubic interpolation, Lanczos resampling, statistical priors, neighbor embedding, and sparse coding. In recent years, a series of convolutional neural network (CNN) models have been proposed for single-image SR. Deep learning attempts to learn layered, hierarchical representations of high-dimensional data. However, the classical CNN for SR is a single-input model that limits its performance. These CNNs require deep networks, considerable training consumption, and a large number of sample images to obtain images with good details. These requirements lead to the use of numerous parameters to train the networks, an increased number of iterations for training, and the need for large hardware. In view of these existing problems, an improved super-resolution reconstruction network model is proposed. Method Unlike the traditional single-input model, we adopt a mutual-detail convolution model with double input. The combination of paths of different scales enables the model to synthesize a wide range of receptive fields. The different features of image blocks with different sizes are complemented at different scales. Low-dimensional and high-dimensional features are combined to supplement the details of the restored images to improve the quality and detail of reconstructed images. 
Traditional self-similarity-based methods can also be combined with neural networks. The entire convolution model can be divided into three parts: F1, F2, and F3 networks. F1 is the feature extraction and nonlinear mapping network. Filters with spatial sizes of 9×9 and 3×3 are used. F2 is the detail network used to complement the features of F1. F2 consists of two layers and filters with spatial sizes of 11×11 and 5×5. F3 is the reconstruction network. We use mean squared error as the loss function. The loss is minimized using stochastic gradient descent (SGD) with standard backpropagation. The network takes an original low-resolution image and an interpolated low-resolution image (to the desired size) as inputs and predicts the image details. Our method adds a new input to supplement the high-frequency information that is lost during the reconstruction process. As shown in the literature, deep learning generally benefits from big-data training. We use a training dataset of 500 images from BSD500, and the flipped and rotated versions of the training images are considered. We rotate the original images by 90° and 270°. The training images are split into 33×33 and 39×39 sub-images, with a stride of 14, by considering training time and storage complexities. We set the mini-batch size of SGD to 64 and the momentum parameter to 0.9. Result We use Set5 and Set14 as the validation sets. Following previous experiments, we adopt the conventional approach to super-resolving color images. We transform the color images into the YCbCr space. The SR algorithms are applied only on the Y channel, whereas the Cb and Cr channels are upscaled by bicubic interpolation. We show the quantitative and qualitative results of our method in comparison with those of state-of-the-art methods. 
Compared with traditional methods and SRCNN, our method obtains better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values on the Set5 and Set14 datasets. For an upscaling factor of 3, the average gains on PSNR achieved by our method are 0.17 and 0.08 dB higher than those of the next best approach, SRCNN, on the two datasets. A similar trend is observed when we use SSIM as the performance metric. Compared with SRCNN, the number of training iterations required by our approach is reduced by two orders of magnitude. With a lightweight structure, our method achieves superior performance to that of state-of-the-art methods. Conclusion The experiments show that the proposed method can effectively reconstruct images with considerable detail using minimal training and relatively shallow networks. However, unlike the result of a very deep neural network, the result of our method is not sufficiently precise, and the network structure is relatively simple. We will consider using deep layers to acquire numerous image features at different layers and extending our model to several image tasks in future work.
Key words
super-resolution reconstruction; deep learning; convolutional neural network; nonlinear mapping
0 Introduction
Super-resolution reconstruction[1] refers to recovering a high-resolution image from one or more low-resolution images of the same scene. Super-resolution is an ill-posed problem: the solution is not unique[2], which means a single low-resolution image corresponds to many possible high-resolution images. The key question is how to find the corresponding high-quality high-resolution image. Current super-resolution methods fall into three categories: interpolation-based, reconstruction-based, and learning-based methods[3]. Interpolation-based methods map the pixels of the low-resolution image onto the high-resolution grid and estimate the missing pixels from the known ones; classical examples are new edge-directed interpolation (NEDI)[4] and local edge-adaptive interpolation[5-6]. Reconstruction-based methods mine the high-frequency information in the low-resolution image and, combined with image priors, solve the inverse of the low-resolution imaging process to restore the high-frequency content; examples include self-similarity redundancy priors[7-8] and gradient profile priors[9]. With the development of machine learning in recent years, learning-based methods have become a research hotspot[10-12]. Instead of constraining the solution space with a general hand-crafted prior, learning-based methods build a training set from a large number of samples and learn the mapping between low- and high-resolution images, with the prior information implicitly encoded in that mapping.
This paper uses a convolutional neural network to learn the mapping between high- and low-resolution images. Compared with traditional end-to-end neural networks, the proposed method is inspired by residual networks (ResNet)[13] and by the observation that the features produced in the lower layers of a network also contribute to the final result. However, current residual networks have a very large number of layers and are extremely expensive to train, so we instead use a larger input image patch to supplement the lost details; the model structure is shown in Fig. 1. Experimental results show that, with few samples and few iterations, the proposed method outperforms the super-resolution convolutional neural network (SRCNN)[10].
The main contributions of this paper are:
1) A mutual-detail convolutional neural network model is designed and implemented, which uses the detail features of image patches of different sizes to supplement the details of the reconstructed high-resolution image.
2) The proposed method achieves good results with very few samples; correspondingly, it requires little overhead, and the learned mapping is accurate.
3) The effectiveness of the method is demonstrated, and the differences between the proposed method and mainstream representative methods are analyzed from both subjective and objective perspectives.
1 Related work
Learning-based super-resolution methods build prior information from the relationship between the high- and low-resolution versions of a sample image set, or of the image itself, to achieve effective reconstruction. With the development of deep learning, learning-based super-resolution methods can be further divided into classical learning methods (manifold and sparse learning, etc.)[14] and methods based on deep network representations[12, 15-16]. The two categories are introduced below.
1) Classical learning-based super-resolution: the main idea of this class of methods is that the mapping between high- and low-resolution image patches can be learned and then used to recover the most plausible high-resolution image. Example-based learning methods have been shown to break through the limits of traditional super-resolution reconstruction[17-18]. In 2004, Chang et al.[19] proposed a neighbor-embedding super-resolution algorithm that, following the idea of manifold learning, finds for each low-resolution image patch its nearest neighbors among the training patches and reconstructs the corresponding high-resolution patch as a weighted combination of the neighbors' high-resolution counterparts.
2) Image reconstruction via deep learning frameworks. With the development of machine learning, various deep-learning-based methods for super-resolution reconstruction have appeared in recent years. Unlike classical learning methods, deep-learning-based reconstruction algorithms use large numbers of image samples and build deep network models to learn the mapping between corresponding high- and low-resolution images. In 2014, Dong et al.[10] proposed using a convolutional neural network to solve single-image super-resolution. In 2015, Kim et al.[12] proposed a 20-layer deep convolutional neural network to learn the mapping between high- and low-resolution images. In the same period, Kim et al.[27] proposed the deeply-recursive convolutional network (DRCN), which effectively reduces the number of network parameters while enlarging the receptive field. In 2016, Ledig et al.[11] proposed a network model that applies generative adversarial networks to super-resolution. These methods all solve the super-resolution problem well, but they suffer from networks that are too deep to train easily and from the need for many iterations and large amounts of sample data.
2 Mutual-detail network model
Denote the single low-resolution input image by $\mathit{\boldsymbol{Y}}$ and the corresponding high-resolution image by $\mathit{\boldsymbol{X}}$. The goal is to learn a mapping $F$ such that the reconstructed image $F(\mathit{\boldsymbol{Y}})$ is as close as possible to $\mathit{\boldsymbol{X}}$.
The proposed mutual-detail network model consists of three parts: the first is the network for feature extraction and nonlinear mapping; the second is the detail-supplement network, which supplements the feature details of the reconstructed image; the third is the reconstruction network, which performs the final high-resolution reconstruction.
2.1 Feature extraction and nonlinear mapping
The first task is to extract the effective feature information contained in the low-resolution image. Existing feature-extraction methods obtain edge and other feature information by convolving the image with filter kernels[29]. A convolutional neural network likewise convolves the image with various kernels to produce the input of the next layer. Therefore, to extract low-level image features and learn the mapping from low-level to high-level features, a three-layer network is used, denoted $F_1$.
The first layer convolves the input with a set of filters and applies a nonlinear activation
$ {F_{1, 1}} = \max \left( {0, {\mathit{\boldsymbol{W}}_{1, 1}} \otimes Y + {\mathit{\boldsymbol{B}}_{1, 1}}} \right) $ | (1) |
where $\mathit{\boldsymbol{W}}_{1, 1}$ and $\mathit{\boldsymbol{B}}_{1, 1}$ are the filters and biases of the first layer, $\otimes$ denotes the convolution operation, and $\mathit{\boldsymbol{Y}}$ is the interpolated low-resolution input. The activation function is the rectified linear unit (ReLU)[30-31]
$ {\rm{ReLU}} = \max \left( {0, x} \right) $ | (2) |
Through the first layer's convolution, a feature representation is extracted from every image patch. The second and third layers then nonlinearly map these features
$ {F_{1, 2}} = \max \left( {0, {\mathit{\boldsymbol{W}}_{1, 2}} \otimes {F_{1, 1}}\left( \mathit{\boldsymbol{Y}} \right) + {\mathit{\boldsymbol{B}}_{1, 2}}} \right) $ | (3) |
$ {F_{1, 3}} = \max \left( {0, {\mathit{\boldsymbol{W}}_{1, 3}} \otimes {F_{1, 2}}\left( \mathit{\boldsymbol{Y}} \right) + {\mathit{\boldsymbol{B}}_{1, 3}}} \right) $ | (4) |
where $\mathit{\boldsymbol{W}}_{1, 2}$, $\mathit{\boldsymbol{W}}_{1, 3}$ and $\mathit{\boldsymbol{B}}_{1, 2}$, $\mathit{\boldsymbol{B}}_{1, 3}$ are the filters and biases of the second and third layers of $F_1$, respectively.
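As a concrete illustration, the per-layer operation in Eqs. (1)-(4), a convolution followed by the ReLU of Eq. (2), can be sketched in NumPy. The 5×5 patch, the averaging filter, and the bias value below are toy assumptions for illustration, not the trained weights of the network:

```python
import numpy as np

def conv2d(x, w, b):
    """'Valid' 2-D convolution of a single-channel image x with one
    filter w plus a scalar bias b: a stand-in for W ⊗ Y + B."""
    kh, kw = w.shape
    h, wd = x.shape
    out = np.empty((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def relu(x):
    """ReLU of Eq. (2): max(0, x)."""
    return np.maximum(0.0, x)

# F_{1,1} = max(0, W ⊗ Y + B) on a toy 5x5 patch with one 3x3 filter
y = np.arange(25, dtype=float).reshape(5, 5)
w = np.full((3, 3), 1.0 / 9.0)   # illustrative averaging filter
f11 = relu(conv2d(y, w, b=-5.0))
```

A real layer would apply many filters in parallel (e.g. the 9×9 and 3×3 filters described in the English abstract), producing one feature map per filter; the second and third layers of Eqs. (3)-(4) repeat the same conv-then-ReLU pattern on those maps.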
2.2 Detail-supplement network
The $F_1$ network extracts features from the interpolated low-resolution input, but high-frequency information is inevitably lost during interpolation and reconstruction. Exploiting the local self-similarity of images, the detail-supplement network $F_2$ takes a larger patch of the original low-resolution image as a second input and extracts complementary features through two convolution layers
$ {F_{2, 1}} = \max \left( {0, {\mathit{\boldsymbol{W}}_{2, 1}} \otimes \mathit{\boldsymbol{Y}} + {\mathit{\boldsymbol{B}}_{2, 1}}} \right) $ | (5) |
$ {F_{2, 2}} = \max \left( {0, {\mathit{\boldsymbol{W}}_{2, 2}} \otimes {F_{2, 1}}\left( \mathit{\boldsymbol{Y}} \right) + {\mathit{\boldsymbol{B}}_{2, 2}}} \right) $ | (6) |
where $\mathit{\boldsymbol{W}}_{2, 1}$, $\mathit{\boldsymbol{W}}_{2, 2}$ and $\mathit{\boldsymbol{B}}_{2, 1}$, $\mathit{\boldsymbol{B}}_{2, 2}$ are the filters and biases of the two layers of $F_2$.
2.3 Reconstruction network
In traditional methods, the final image is obtained as a weighted average over feature maps; neighbor embedding, for example, is such a weighted-averaging method[19]. Here, the features extracted by $F_1$ and $F_2$ are fused, and a final convolution layer produces the reconstructed high-resolution image
$ {F_3} = {\mathit{\boldsymbol{W}}_3} \otimes \mathit{\boldsymbol{\tilde Y}} + {\mathit{\boldsymbol{B}}_3} $ | (7) |
where $\mathit{\boldsymbol{W}}_3$ and $\mathit{\boldsymbol{B}}_3$ are the filters and biases of the reconstruction layer, and $\mathit{\boldsymbol{\tilde Y}}$ denotes the fused feature maps from $F_1$ and $F_2$. Note that no activation function is applied in this layer.
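A minimal sketch of the fusion and reconstruction step of Eq. (7): feature maps from the two paths are stacked along the channel axis, and a single linear convolution maps them back to one image channel. The map counts, the 1×1 filters, and the random values are illustrative assumptions only, chosen to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(0)

f1_maps = rng.random((4, 8, 8))   # hypothetical feature maps from F1
f2_maps = rng.random((2, 8, 8))   # hypothetical feature maps from F2

# Channel-wise fusion: the stacked maps play the role of Y~ in Eq. (7).
fused = np.concatenate([f1_maps, f2_maps], axis=0)

# One linear 1x1 convolution (no ReLU, as in Eq. (7)): a weighted sum
# over channels plus a bias, producing the reconstructed image F3.
w3 = np.full(fused.shape[0], 1.0 / fused.shape[0])
b3 = 0.1
recon = np.tensordot(w3, fused, axes=([0], [0])) + b3
```

With larger spatial filters, each output pixel would additionally draw on a neighborhood of the fused maps, but the channel-mixing structure stays the same.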
2.4 Training the network
Given a training set of high-resolution images $\mathit{\boldsymbol{X}}_i$ and their corresponding low-resolution inputs $\mathit{\boldsymbol{Y}}_i$, the network parameters $\mathit{\boldsymbol{\theta}}$ (all filters and biases) are learned by minimizing the mean squared error
$ L\left( \mathit{\boldsymbol{\theta }} \right) = \frac{1}{{2N}}\sum\limits_{i = 1}^N {{{\left\| {F\left( {{\mathit{\boldsymbol{Y}}_i}, \mathit{\boldsymbol{\theta }}} \right) - {\mathit{\boldsymbol{X}}_i}} \right\|}^2}} $ | (8) |
The loss is minimized using stochastic gradient descent with standard backpropagation. Minimizing the mean squared error also favors a high peak signal-to-noise ratio (PSNR), defined as
$ {\rm{PSNR}} = 10 \times \lg \left( {\frac{{{{\left( {{2^n} - 1} \right)}^2}}}{{{\rm{MSE}}}}} \right) $ | (9) |
As can be seen, the smaller the MSE, the higher the PSNR, so PSNR is used as the quantitative evaluation metric in the experiments.
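The PSNR of Eq. (9) for 8-bit images ($n = 8$, peak value 255) can be computed as follows; the two constant toy images are assumptions for illustration:

```python
import numpy as np

def psnr(x, y, bit_depth=8):
    """Peak signal-to-noise ratio per Eq. (9), for images whose
    values lie in [0, 2^n - 1]."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    peak = (2 ** bit_depth - 1) ** 2
    return 10.0 * np.log10(peak / mse)

# Two toy 4x4 images differing by a constant 5 everywhere: MSE = 25.
a = np.zeros((4, 4))
b = np.full((4, 4), 5.0)
val = psnr(a, b)   # 10 * log10(255^2 / 25) ≈ 34.15 dB
```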
3 Experiments and results
This section details how the experimental data are obtained, how the parameters are determined, and how the model is trained. Finally, the results of the proposed method are quantitatively compared with those of other representative methods.
3.1 Datasets
The training set consists of 500 images from BSD500, augmented by rotating the original images by 90° and 270°; Set5 and Set14 are used as the test sets.
3.2 Training
The low-resolution images are first converted from the RGB color space to the YCbCr color space. Human vision is more sensitive to luminance than to color[37], and experiments in SRCNN have shown that training on the Y (luminance) channel alone is sufficient; the Cb and Cr channels are upscaled by bicubic interpolation.
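The luminance extraction described above can be sketched as follows, using the standard BT.601 weights for the Y channel of YCbCr (the offset terms of the Cb/Cr channels and their bicubic upscaling are omitted from this sketch):

```python
import numpy as np

def rgb_to_y(img):
    """Luminance (Y of YCbCr, BT.601 weights) from an RGB image with
    values in [0, 255]. Only this channel is fed to the SR network."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A neutral gray pixel keeps its value: the weights sum to 1.
gray = np.full((2, 2, 3), 128.0)
y = rgb_to_y(gray)
```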
Because the detail-supplement network uses two input scales for feature extraction, and considering training time and storage, the training images are cropped into 33×33 and 39×39 sub-images with a stride of 14.
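The cropping of training images into fixed-size sub-images with a stride of 14 can be sketched as follows (the 61×61 toy image is an assumption; border regions that do not fit a full sub-image are simply dropped):

```python
import numpy as np

def extract_patches(img, size, stride):
    """Crop size x size sub-images from a 2-D image with the given
    stride, as in the training-set preparation described above."""
    h, w = img.shape
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]

img = np.zeros((61, 61))
p33 = extract_patches(img, 33, 14)   # patches for the F1 input path
p39 = extract_patches(img, 39, 14)   # larger patches for the F2 path
```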
During training, the mini-batch size of stochastic gradient descent is set to 64 and the momentum parameter to 0.9.
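A single SGD-with-momentum parameter update with the stated momentum of 0.9 might look like this; the learning rate and the gradient values are illustrative assumptions, not values from the paper, and the gradient stands in for a real backpropagation result:

```python
import numpy as np

lr, mu = 1e-4, 0.9          # learning rate (assumed), momentum (from text)
w = np.zeros(3)             # a tiny stand-in for the network parameters
v = np.zeros(3)             # velocity buffer

grad = np.array([1.0, -2.0, 0.5])   # placeholder gradient of the loss

# Classical momentum update: accumulate velocity, then step.
v = mu * v - lr * grad
w = w + v
```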
3.3 Analysis and comparison
This section demonstrates the effectiveness of the proposed method from both subjective and objective perspectives and compares it with other mainstream methods.
As shown in Fig. 3,
To verify the quality of the results of the proposed method, taking qualitative and quantitative analyses as benchmarks, we compare Bicubic, SC[20], KSVD[36], NE+NNLS[35], NE+LLE[19], and ANR[22] with the proposed method on the Set5 and Set14 datasets at an upscaling factor of 3.
Table 1
PSNR results of the proposed method and other methods on the Set5 dataset (3×) /dB
Set5 | Scale | Bicubic | SC[20] | KSVD[36] | NE+NNLS[35] | NE+LLE[19] | ANR[22] | Ours |
baby | 3 | 33.91 | 34.29 | 35.08 | 34.77 | 35.06 | 35.13 | 35.08 |
bird | 3 | 32.58 | 34.11 | 34.57 | 34.26 | 34.56 | 34.60 | 35.45 |
butterfly | 3 | 24.04 | 25.58 | 25.94 | 25.61 | 25.75 | 25.90 | 28.92 |
head | 3 | 32.88 | 33.17 | 33.56 | 33.45 | 33.60 | 33.63 | 33.71 |
woman | 3 | 28.56 | 29.94 | 30.37 | 29.89 | 30.22 | 30.33 | 31.41 |
Average | 3 | 30.39 | 31.42 | 31.90 | 31.60 | 31.84 | 31.92 | 32.92 |
Note: bold indicates the best result; italic indicates the second best. |
Table 2
PSNR results of the proposed method and other methods on the Set14 dataset (3×) /dB
Set14 | Scale | Bicubic | SC[20] | KSVD[36] | NE+NNLS[35] | NE+LLE[19] | ANR[22] | Ours |
baboon | 3 | 23.21 | 23.47 | 23.52 | 23.49 | 23.55 | 23.56 | 23.68 |
barbara | 3 | 26.25 | 26.39 | 26.76 | 26.67 | 26.74 | 26.69 | 26.48 |
bridge | 3 | 24.40 | 24.82 | 25.02 | 24.86 | 24.98 | 25.01 | 25.25 |
coastguard | 3 | 26.55 | 27.02 | 27.15 | 27.00 | 27.07 | 27.08 | 27.33 |
comic | 3 | 23.12 | 23.90 | 23.96 | 23.83 | 23.98 | 24.04 | 24.73 |
face | 3 | 32.82 | 33.11 | 33.53 | 33.45 | 33.56 | 33.62 | 33.73 |
flowers | 3 | 27.23 | 28.25 | 28.42 | 28.21 | 28.30 | 28.49 | 29.49 |
foreman | 3 | 31.18 | 32.64 | 33.19 | 32.87 | 33.21 | 33.21 | 34.06 |
Lena | 3 | 31.68 | 32.64 | 33.00 | 32.82 | 33.01 | 33.08 | 33.69 |
man | 3 | 27.01 | 27.76 | 27.90 | 27.20 | 27.87 | 27.92 | 28.46 |
monarch | 3 | 29.43 | 30.71 | 31.10 | 30.76 | 30.95 | 31.09 | 33.63 |
pepper | 3 | 32.39 | 33.32 | 34.07 | 33.56 | 33.80 | 33.82 | 34.70 |
ppt3 | 3 | 23.71 | 24.98 | 25.23 | 24.81 | 24.94 | 25.03 | 27.06 |
zebra | 3 | 26.63 | 27.95 | 28.49 | 28.12 | 28.31 | 28.43 | 28.83 |
Average | 3 | 27.54 | 28.31 | 28.67 | 28.44 | 28.60 | 28.65 | 29.37 |
Note: bold indicates the best result; italic indicates the second best. |
Table 3
The average results of the proposed method and other methods on the Set5 and Set14 datasets
To verify the effectiveness of the detail-supplement network, we compare the PSNR obtained without the detail-supplement network against that of the full model on the Set5 dataset (Table 4).
Table 4
PSNR results on the Set5 dataset with and without the detail-supplement network /dB
Set5 | Scale | Without detail network | Ours |
baby | 3 | 35.03 | 35.08 |
bird | 3 | 35.39 | 35.45 |
butterfly | 3 | 28.81 | 28.92 |
head | 3 | 33.70 | 33.71 |
woman | 3 | 31.27 | 31.41 |
The experimental results show that, compared with Bicubic, SC[20], KSVD[36], NE+LLE[19], NE+NNLS[35], ANR[22], A+[23], RFL[38], SelEx[26], and SRCNN[10], the proposed method achieves better reconstruction in terms of both PSNR and SSIM.
4 Conclusion
This paper proposes a mutual-detail convolutional neural network model that reconstructs low-resolution images through detail supplementation. Two inputs of different scales and filters of different sizes are used to obtain multiple image features, and low-level and high-level features are jointly exploited to supplement the details of the restored image. Experimental results show that, with small samples and few iterations, the proposed method effectively restores image details and texture, and the high-resolution images it produces are of higher quality than those of SRCNN[10] and similar methods. Nevertheless, several limitations remain. Compared with very deep networks such as that in [12], the resulting images are still somewhat inferior, and the current mutual-detail idea is relatively simple; the feature-fusion scheme for detail supplementation deserves further study. In addition, although the number of iterations required is lower than that of SRCNN, it is still relatively high and needs to be reduced in future work. At the same network depth, however, the proposed method performs well. Some issues with network optimization were also encountered during the experiments. Future work will consider deeper network models to obtain image features at different levels and better restore image details, better optimization methods to speed up training and improve its quality, and better reconstruction of regions with dense texture detail.
References
-
[1] Van Ouwerkerk J D. Image super-resolution survey[J]. Image and Vision Computing, 2006, 24(10): 1039–1052. [DOI:10.1016/j.imavis.2006.02.026]
-
[2] Irani M, Peleg S. Improving resolution by image registration[J]. CVGIP:Graphical Models and Image Processing, 1991, 53(3): 231–239. [DOI:10.1016/1049-9652(91)90045-L]
-
[3] Tian J, Ma K K. A survey on super-resolution imaging[J]. Signal, Image and Video Processing, 2011, 5(3): 329–342. [DOI:10.1007/s11760-010-0204-6]
-
[4] Li X, Orchard M T. New edge-directed interpolation[J]. IEEE Transactions on Image Processing, 2001, 10(10): 1521–1527. [DOI:10.1109/83.951537]
-
[5] Leu J G. Image enlargement based on a step edge model[J]. Pattern Recognition, 2000, 33(12): 2055–2073. [DOI:10.1016/S0031-3203(99)00184-3]
-
[6] Cha Y, Kim S. Edge-forming methods for color image zooming[J]. IEEE Transactions on Image Processing, 2006, 15(8): 2315–2323. [DOI:10.1109/TIP.2006.875182]
-
[7] Tai Y W, Liu S C, Brown M S, et al. Super resolution using edge prior and single image detail synthesis[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, CA, USA: IEEE, 2010: 2400-2407. [DOI:10.1109/CVPR.2010.5539933]
-
[8] Zhang K B, Gao X B, Tao D C, et al. Single image super-resolution with multiscale similarity learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2013, 24(10): 1648–1659. [DOI:10.1109/TNNLS.2013.2262001]
-
[9] Sun J, Xu Z B, Shum H Y. Image super-resolution using gradient profile prior[C]//Proceedings of 2008 IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, AK, USA: IEEE, 2008: 1-8. [DOI:10.1109/CVPR.2008.4587659]
-
[10] Dong C, Loy C C, He K M, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295–307. [DOI:10.1109/TPAMI.2015.2439281]
-
[11] Ledig C, Theis L, Huszar F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 4681-4690. [DOI:10.1109/CVPR.2017.19]
-
[12] Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1646-1654. [DOI:10.1109/CVPR.2016.182]
-
[13] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 770-778. [DOI:10.1109/CVPR.2016.90]
-
[14] Freeman W T, Jones T R, Pasztor E C. Example-based super-resolution[J]. IEEE Computer Graphics and Applications, 2002, 22(2): 56–65. [DOI:10.1109/38.988747]
-
[15] Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1646-1654. [DOI:10.1109/CVPR.2016.182]
-
[16] Wang Z W, Liu D, Yang J C, et al. Deep networks for image super-resolution with sparse prior[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 370-378. [DOI:10.1109/ICCV.2015.50]
-
[17] Wang Q, Tang X O, Shum H. Patch based blind image super resolution[C]//Proceedings of the 10th IEEE International Conference on Computer Vision. Beijing, China: IEEE, 2005: 709-716. [DOI:10.1109/ICCV.2005.186]
-
[18] Lin Z C, Shum H Y. Fundamental limits of reconstruction-based super-resolution algorithms under local translation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1): 83–97. [DOI:10.1109/TPAMI.2004.1261081]
-
[19] Chang H, Yeung D Y, Xiong Y M. Super-resolution through neighbor embedding[C]//Proceedings of 2004 Computer Society Conference on Computer Vision and Pattern Recognition. Washington DC, USA: IEEE, 2004: I-275-I-282. [DOI:10.1109/CVPR.2004.1315043]
-
[20] Yang J C, Wright J, Huang T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861–2873. [DOI:10.1109/TIP.2010.2050625]
-
[21] Dong W S, Zhang L, Shi G M, et al. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization[J]. IEEE Transactions on Image Processing, 2011, 20(7): 1838–1857. [DOI:10.1109/TIP.2011.2108306]
-
[22] Timofte R, De V, Van Gool L. Anchored neighborhood regression for fast example-based super-resolution[C]//Proceedings of 2013 IEEE International Conference on Computer Vision. Sydney, NSW, Australia: IEEE, 2013: 1920-1927. [DOI:10.1109/ICCV.2013.241]
-
[23] Timofte R, De Smet V, Van Gool L. A+: adjusted anchored neighborhood regression for fast super-resolution[C]//Computer Vision——ACCV 2014. Cham: Springer, 2015: 111-126. [DOI:10.1007/978-3-319-16817-3_8]
-
[24] Glasner D, Bagon S, Irani M. Super-resolution from a single image[C]//Proceedings of the IEEE 12th International Conference on Computer Vision. Kyoto, Japan: IEEE, 2009: 349-356. [DOI:10.1109/ICCV.2009.5459271]
-
[25] Freedman G, Fattal R. Image and video upscaling from local self-examples[J]. ACM Transactions on Graphics, 2011, 30(2): 12. [DOI:10.1145/1944846.1944852]
-
[26] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 5197-5206. [DOI:10.1109/CVPR.2015.7299156]
-
[27] Kim J, Lee J K, Lee K M. Deeply-recursive convolutional network for image super-resolution[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1637-1645. [DOI:10.1109/CVPR.2016.181]
-
[28] Keys R. Cubic convolution interpolation for digital image processing[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1981, 29(6): 1153–1160. [DOI:10.1109/TASSP.1981.1163711]
-
[29] Bertasius G, Shi J B, Torresani L. DeepEdge: a multi-scale bifurcated deep network for top-down contour detection[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 4380-4389. [DOI:10.1109/CVPR.2015.7299067]
-
[30] Nair V, Hinton G E. Rectified linear units improve restricted boltzmann machines[C]//Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel: ACM, 2010: 807-814.
-
[31] Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks[C]//Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, USA: [s. n. ], 2011: 315-323.
-
[32] Suetake N, Sakano M, Uchino E. Image super-resolution based on local self-similarity[J]. Optical Review, 2008, 15(1): 26–30. [DOI:10.1007/s10043-008-0005-0]
-
[33] Deng J, Dong W, Socher R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL, USA: IEEE, 2009: 248-255. [DOI:10.1109/CVPR.2009.5206848]
-
[34] Sugano Y, Matsushita Y, Sato Y, et al. Graph-based joint clustering of fixations and visual entities[J]. ACM Transactions on Applied Perception, 2013, 10(2): 10.
-
[35] Bevilacqua M, Roumy A, Guillemot C, et al. Low-complexity single-image super-resolution based on nonnegative neighbor embedding[C]//Proceedings of British Machine Vision Conference. Surrey, UK: BMVC, 2012.
-
[36] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations[C]//Proceedings of the 7th International Conference on Curves and Surfaces. Avignon, France: Springer-Verlag, 2010: 711-730. [DOI:10.1007/978-3-642-27413-8_47]
-
[37] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 5197-5206. [DOI:10.1109/CVPR.2015.7299156]
-
[38] Schulter S, Leistner C, Bischof H. Fast and accurate image upscaling with super-resolution forests[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 3791-3799. [DOI:10.1109/CVPR.2015.7299003]