High-security image steganography with the combination of multiple competition and channel attention

Ma Bin1,2, Li Kun1,2, Xu Jian3, Wang Chunpeng1,2, Li Jian1,2, Zhang Liwei4 (1. School of Cyber Security, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China; 2. Shandong Provincial Key Laboratory of Computer Networks, Jinan 250098, China; 3. School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan 250014, China; 4. Integrated Electronic Systems Lab Co., Ltd., Jinan 250104, China)

Abstract
Objective  Most existing steganographic algorithms based on adversarial images can only craft adversarial images against a single steganalyzer and cannot withstand detection by the latest convolutional neural network-based steganalyzers such as the steganalysis residual network (SRNet) and Zhu-Net. To address this situation, a high-security image steganography method combining multiple adversarial training and channel attention is proposed.

Method  A generative adversarial network based on the U-Net architecture is adopted to generate adversarial example images. The self-learning property of the adversarial network is exploited to iteratively optimize the parameters of the multiple-adversarial steganographic network, and adversarial training against several steganalysis algorithms yields cover images better suited to hiding content. Meanwhile, several lightweight channel attention modules are added to the generator to adaptively adjust the distribution of the adversarial noise within the original image, which improves the anti-steganalysis ability of the generated adversarial images. In addition, a dynamically weighted combination of multiple discriminant losses and the mean squared error loss is designed to further enhance the quality of the adversarial images and to guarantee fast and stable network convergence.

Result  Experiments on the BOSS Base 1.01 dataset compare the proposed method with four current mainstream methods. After the steganalyzers are trained on the original stego images, the proposed method reduces the average detection accuracy of five high-performance steganalyzers by 1.6% compared with the other four methods, including the generative multiple-adversarial steganographic algorithm based on the U-Net architecture. After the steganalyzers are retrained with adversarial images and enhanced stego images, it still reduces their average detection accuracy by 6.8% compared with the other four methods. The quality of the adversarial images is also analyzed: the average peak signal-to-noise ratio (PSNR) of the 2 000 adversarial images generated from the test set reaches 39.925 1 dB. The experimental results show that the proposed steganographic network greatly improves the security of the steganographic algorithm.

Conclusion  The proposed method achieves excellent performance in terms of steganographic security, and the generated adversarial images have high visual quality.
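To make the dynamically weighted loss combination described above concrete, the following is a minimal PyTorch-style sketch, assuming a generator trained against several steganalyzers. The function name, weight values, and the interfaces for the steganalyzer scores are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(cover, adv, sdo_scores_1, sdo_scores_2,
                   w_mse=1.0, w_sdo1=0.1, w_sdo2=0.1):
    """Weighted combination of the pixel-wise MSE loss and the two groups of
    multiple-steganalyzer discriminant losses (SDO_loss1 / SDO_loss2).

    cover, adv   : original and adversarial image batches, shape (N, 1, H, W)
    sdo_scores_i : list of per-steganalyzer logits for the adversarial images
                   (group 1) and the enhanced stego images (group 2); the
                   generator is rewarded when the steganalyzers label these
                   images as "cover" (class 0).
    The weights w_* are placeholders; the paper tunes its weighting by ablation.
    """
    mse_loss = F.mse_loss(adv, cover)

    # Each steganalyzer contributes a cross-entropy term that pushes its
    # prediction for the (enhanced) adversarial image toward the "cover" class.
    cover_label = torch.zeros(cover.size(0), dtype=torch.long, device=cover.device)
    sdo_loss1 = sum(F.cross_entropy(s, cover_label) for s in sdo_scores_1)
    sdo_loss2 = sum(F.cross_entropy(s, cover_label) for s in sdo_scores_2)

    return w_mse * mse_loss + w_sdo1 * sdo_loss1 + w_sdo2 * sdo_loss2
```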
Keywords
High-security image steganography with the combination of multiple competition and channel attention

Ma Bin1,2, Li Kun1,2, Xu Jian3, Wang Chunpeng1,2, Li Jian1,2, Zhang Liwei4 (1. School of Cyber Security, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China; 2. Shandong Provincial Key Laboratory of Computer Networks, Jinan 250098, China; 3. School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan 250014, China; 4. Integrated Electronic Systems Lab Co., Ltd., Jinan 250104, China)

Abstract
Objective  The advancement of current steganographic techniques faces many challenges. Methods that modify the original image to hide secret information leave traces, rendering them susceptible to detection by steganalyzers. Coverless steganography improves security but suffers from a small embedding capacity, the need for a large image database, and difficulty in extracting the secret information. Cover-image generative steganography likewise produces small and unnatural images. Adversarial examples provide a new approach to these limitations: subtle perturbations are added to the original image to form an adversarial image that is visually indistinguishable from it yet causes a classifier to output wrong results with high confidence, thereby enhancing the security of image steganography. However, most existing steganographic algorithms based on adversarial examples can only design adversarial samples against a single steganalyzer, making them vulnerable to the latest convolutional neural network-based steganalyzers such as SRNet and Zhu-Net. In response to this problem, a high-security image steganography method with the combination of multiple competition and channel attention is proposed in this study.

Method  In the proposed method, we generate the adversarial noise V with the generator G, which employs a U-Net architecture augmented with channel attention modules. The adversarial noise V is added to the original image X to obtain the adversarial image. The pixel-space minimum mean squared error loss MSE_loss is adopted to train the generator network G so that high-quality and semantically meaningful adversarial images are generated. We then generate the stego image from the original image X with the steganography network (SN) and input the original image X and its corresponding stego image into the steganalysis optimization network to optimize its parameters. Moreover, we build multiple steganalysis adversarial networks (SANs) to discriminate the original image X from its adversarial image; they assign different scores to the adversarial and original images and provide the multiple discriminant losses SDO_loss1. Furthermore, we embed secret messages into the adversarial image through the SN to generate the enhanced stego image. The adversarial image and the enhanced stego image are fed back into the optimized multiple steganalyzers to improve the anti-steganalysis performance of the adversarial image, and the SANs evaluate the data-hiding capability of the adversarial image and provide the multiple discriminant losses SDO_loss2. The weighted superposition of MSE_loss and the multiple steganalysis discrimination losses SDO_loss1 and SDO_loss2 is employed as the cumulative loss function of generator G to improve both the image quality and the anti-steganalysis ability of the adversarial image. With this design, the proposed method achieves fast and stable network convergence, high stego-image visual quality, and strong anti-steganalysis ability.

Result  First, we select four high-performance deep-learning steganalyzers, namely Xu-Net, Ye-Net, SRNet, and Zhu-Net, for simultaneous adversarial training to improve the anti-steganalysis ability of the adversarial images. However, training against four steganalysis networks simultaneously sharply increases the number of model parameters, resulting in slow training and a long training period. Furthermore, during adversarial image generation, each iteration of adversarial noise is generated according to the gradient feedback of all four steganalysis networks, so the original image is subjected to excessive, unnecessary adversarial noise, leading to low-quality adversarial images. In response to this issue, we execute ablation experiments on the different steganalysis networks employed in training; these experiments aim to decrease the model parameters, reduce the training time, and ultimately enhance the quality of the adversarial images and improve their anti-steganalysis capability. The role of the generator is to produce adversarial noise, which is subsequently incorporated into the original image to generate adversarial images. Different positions of the adversarial noise in the original image perturb the steganalysis networks differently and influence the quality of the generated adversarial images. We therefore introduce ablation experiments that add the channel attention module at various positions of the generator to examine its effectiveness, and we fine-tune the parameters of the generator loss function through further ablation. Subsequently, we generate 2 000 adversarial images with the proposed model and evaluate their quality. The results reveal that the average peak signal-to-noise ratio (PSNR) of the 2 000 generated adversarial images is 39.925 1 dB; more than 99.55% of these images have a PSNR greater than 39 dB, and more than 75% have a PSNR greater than 40 dB. Additionally, the average structural similarity index measure (SSIM) of the generated adversarial images is 0.962 5; more than 69.85% have an SSIM greater than 0.955, and more than 55.6% have an SSIM greater than 0.960. These results indicate that the generated adversarial images are visually highly similar to the original images. Finally, we compare the proposed method with current state-of-the-art methods on the BOSS Base 1.01 dataset. Compared with the other four methods, the proposed method decreases the average detection accuracy of the five steganalysis methods by 1.6% after they are trained on the original stego images and by 6.8% after they are further trained with adversarial images and enhanced stego images. The experimental results indicate that the proposed steganographic method significantly improves the security of the steganographic algorithm.

Conclusion  In this study, we propose a steganographic architecture based on the U-Net framework with lightweight channel attention modules to generate adversarial images that can resist multiple steganalysis networks. The experimental results demonstrate that the security and generalization of the proposed algorithm exceed those of the compared steganographic methods.
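The Method above adds lightweight channel attention modules to the U-Net generator so that the adversarial noise is redistributed toward the channels to which the steganalyzers are most sensitive. Below is a minimal sketch of one common form of such a module, a squeeze-and-excitation style block; the class name, reduction ratio, and placement are assumptions for illustration rather than the paper's published design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Lightweight squeeze-and-excitation style channel attention block.

    A sketch of the kind of module inserted at several points of the U-Net
    generator; the reduction ratio and exact placement are assumptions.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: per-channel gates in [0, 1]
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                                  # reweight the feature channels


# Usage example: gating the feature maps of one generator stage so that the
# produced adversarial noise concentrates on the most informative channels.
feat = torch.randn(4, 64, 128, 128)   # hypothetical decoder feature maps
gated = ChannelAttention(64)(feat)    # same shape, channel-wise reweighted
```

In a U-Net generator, a block like this would typically be applied to the feature maps of selected encoder or decoder stages before they are passed on, which is consistent with the ablation over module positions reported in the Result section.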
Keywords
