
Guo Dongsheng1,2, Gu Zhaorui1, Zheng Bing1, Dong Junyu1, Zheng Haiyong1 (1. Faculty of Information Science and Engineering, Ocean University of China, Qingdao 266100, China; 2. Shandong Inspur Academy of Science and Technology Co., Ltd., Jinan 250101, China)

Abstract

Objective Image inpainting and outpainting can be viewed as the problem of painting unknown regions of an image from its known regions, and both are active research topics in computer vision. In recent years, deep neural networks have become the mainstream approach to these tasks. However, current methods mostly treat inpainting and outpainting as separate problems, making it difficult to handle the two in a unified way; moreover, most models are built on convolutional neural networks (CNNs), whose local receptive fields make it hard to paint long-range content. To address these two problems, this paper follows a divide-and-conquer idea and combines a CNN with a Transformer to build a deep neural network, proposing a unified framework and model for image inpainting and outpainting. Method The solution process is decomposed into three parts: representation, prediction, and synthesis. Representation and synthesis are performed by CNNs, fully exploiting their local correlation for image-to-feature mapping and feature-to-image reconstruction; the core prediction is carried out by a Transformer, leveraging its strong global-context modeling capability, and a mask growth strategy is proposed to predict features iteratively, reducing the difficulty for the Transformer of predicting the features of a large unknown region all at once; finally, adversarial learning is introduced to improve the realism of the painted images. Result Experiments report comparative evaluations of inpainting and outpainting on multiple datasets, showing that the proposed method surpasses the compared methods on all performance metrics. Ablation experiments show that the model performs better than a non-decomposed variant, demonstrating the effectiveness of the divide-and-conquer idea. In addition, a detailed experimental analysis of the mask growth strategy shows that iterative prediction effectively improves painting ability. Finally, the influence of key Transformer structural parameters on model performance is investigated. Conclusion This paper proposes a unified iterative-prediction framework for image inpainting and outpainting that outperforms the compared methods, with every component of the design contributing to the improvement, demonstrating the application value and potential of the framework and method for image inpainting and outpainting.
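The mask growth strategy described above can be illustrated with a toy sketch. The paper's actual predictor is a Transformer operating on CNN features; here, purely for illustration, the prediction step is a hypothetical stand-in (`fill_mean`) and the growth schedule is a simple 4-neighbour dilation of the known region, neither of which is specified by the paper.

```python
import numpy as np

def grow_mask(mask, step=1):
    """Expand the known region of a binary mask by `step` pixels.
    mask: 2-D array, 1 = known pixel, 0 = unknown pixel.
    A hypothetical schedule: each call marks the unknown pixels
    bordering the known region as the next band to be predicted."""
    known = mask.astype(bool)
    for _ in range(step):
        grown = known.copy()
        grown[1:, :] |= known[:-1, :]   # 4-neighbour dilation
        grown[:-1, :] |= known[1:, :]
        grown[:, 1:] |= known[:, :-1]
        grown[:, :-1] |= known[:, 1:]
        known = grown
    return known.astype(mask.dtype)

def iterative_predict(feat, mask, predict_band):
    """Fill unknown features band by band until the mask is all-known."""
    mask = mask.copy()
    while not mask.all():
        new_mask = grow_mask(mask)
        band = (new_mask == 1) & (mask == 0)   # next ring to predict
        feat = predict_band(feat, mask, band)  # e.g. a Transformer step
        mask = new_mask
    return feat

# Toy demo: one known pixel in the center, everything else unknown
mask0 = np.zeros((5, 5), dtype=int)
mask0[2, 2] = 1
feat0 = np.where(mask0 == 1, 7.0, 0.0)

def fill_mean(f, mask, band):
    # hypothetical stand-in for the learned predictor
    f = f.copy()
    f[band] = f[mask == 1].mean()
    return f

result = iterative_predict(feat0, mask0, fill_mean)
```

The same loop covers both tasks: for outpainting the known region grows outward from the image, while for inpainting it grows inward from the hole's border.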
Unified framework with iterative prediction for image inpainting and outpainting

Guo Dongsheng1,2, Gu Zhaorui1, Zheng Bing1, Dong Junyu1, Zheng Haiyong1(1.Faculty of Information Science and Engineering, Ocean University of China, Qingdao 266100, China;2.Inspur Academy of Science and Technology, Jinan 250101, China)

Objective Image inpainting and outpainting are significant challenges in the field of computer vision. They involve filling the unknown regions of an image on the basis of the information available in its known regions. With its rapid advancement, deep learning has become the mainstream approach to these tasks. However, existing solutions frequently regard inpainting and outpainting as separate problems and thus cannot adapt seamlessly between the two. Furthermore, convolutional neural networks (CNNs) are commonly used in these methods, but the locality of convolution limits their ability to capture long-range content. To address these issues, this study proposes a unified framework that combines CNN and Transformer models on the basis of a divide-and-conquer strategy, aiming to handle image inpainting and outpainting effectively.

Method Our proposed approach consists of three stages: representation, prediction, and synthesis. In the representation stage, a CNN encoder maps the input image to a set of meaningful features. This step leverages the local information-processing capability of CNNs to extract relevant features from the known regions of an image; the encoder incorporates partial convolutions and pixel normalization to reduce the introduction of irrelevant information from unknown regions. The extracted features are then passed to the prediction stage, where we utilize the Transformer architecture, which excels at modeling global context, to predict features for the unknown regions of an image. The Transformer has proven highly effective at capturing long-range dependencies and contextual information in various domains, such as natural language processing; by incorporating one, we enhance the model's ability to predict accurate and coherent content for inpainting and outpainting tasks. To address the difficulty of predicting features for large unknown regions in parallel, we introduce a mask growth strategy that enables iterative feature prediction: the model progressively predicts features for larger regions by gradually expanding the inpainted or outpainted area. This iterative process helps the model refine its predictions and capture more relevant contextual information, leading to improved results. Finally, in the synthesis stage, we reconstruct the complete image by combining the predicted features with the known features from the representation stage. A CNN decoder consisting of multiple convolutional residual blocks with upsampling at intervals generates visually appealing and realistic results while reducing the difficulty of model optimization.

Result To evaluate the effectiveness of the proposed method, we conduct comprehensive experiments on diverse datasets encompassing objects and scenes for both image inpainting and outpainting. We compare our approach with state-of-the-art methods using various evaluation metrics, including the structural similarity index measure, peak signal-to-noise ratio, and perceptual quality metrics. The experimental results show that our unified framework surpasses the existing methods on all evaluation metrics, demonstrating its superior performance. The combination of CNNs and a Transformer allows the model to capture both local details and long-range dependencies, yielding more accurate and visually appealing inpainting and outpainting results. In addition, ablation studies confirm the effectiveness of each component of our method, including the framework structure and the mask growth strategy; all three stages are confirmed to contribute to the performance improvement, highlighting the applicability of the divide-and-conquer design. Furthermore, we empirically investigate the effect of the numbers of Transformer heads and layers on overall performance, revealing that appropriate numbers of iterations, Transformer heads, and Transformer layers can further enhance the framework's performance.

Conclusion This study introduces a unified iterative-prediction framework for addressing image inpainting and outpainting challenges. The proposed method outperforms existing approaches, with each aspect of the design contributing to the overall improvement. The combination of CNNs and a Transformer enables the model to capture both local and global context, leading to more accurate and visually coherent image inpainting and outpainting results. These findings underscore the practical value and potential of a unified iterative-prediction framework and method in this field. Future research directions include applying the framework to other related tasks and further optimizing the model architecture for greater efficiency and scalability. Moreover, integrating self-supervised learning techniques with large-scale datasets could further improve the robustness and generalization capability of the model for image inpainting and outpainting tasks.
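The three-stage decomposition (representation, prediction, synthesis) can be sketched end to end with toy stand-ins. None of the components below match the paper's actual architecture: patch averaging replaces the partial-convolution CNN encoder, a single positional attention step replaces the multi-head, multi-layer Transformer, and nearest-neighbour upsampling replaces the residual CNN decoder; the sketch only shows how the stages hand data to one another.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def represent(img, patch=2):
    """'Representation': stand-in for the CNN encoder -- non-overlapping
    patch averaging yields one feature token per patch."""
    h, w = img.shape
    return img.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

def predict(tokens, known):
    """'Prediction': a single attention step in place of the Transformer --
    each unknown token attends to the known tokens. With no learned
    embeddings, negative index distance serves as a toy attention score."""
    flat = tokens.flatten()                  # flatten() returns a copy
    k_idx = np.flatnonzero(known.flatten())
    u_idx = np.flatnonzero(~known.flatten())
    scores = -np.abs(u_idx[:, None] - k_idx[None, :]).astype(float)
    attn = softmax(scores, axis=1)           # rows sum to 1
    flat[u_idx] = attn @ flat[k_idx]         # fill unknowns from knowns
    return flat.reshape(tokens.shape)

def synthesize(tokens, patch=2):
    """'Synthesis': stand-in for the CNN decoder -- nearest upsampling."""
    return np.kron(tokens, np.ones((patch, patch)))
```

A minimal run chains the stages: encode a 4×4 image into 2×2 tokens, predict the one unknown token from the three known ones, then decode back to image resolution. The adversarial loss of the full model has no counterpart here, since it only affects training.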