Objective Most existing steganographic algorithms based on adversarial images can only design adversarial images against a single steganalyzer, and cannot resist detection by the latest convolutional neural network (CNN)-based steganalyzers such as the steganalysis residual network (SRNet) and Zhu-Net. To address this, a high-security image steganography method combining multiple adversarial networks and channel attention is proposed. Method A generative adversarial network with a U-Net-based generator is used to produce adversarial images. The self-learning property of adversarial networks enables iterative optimization of the parameters of the multiple-adversarial steganography network, and adversarial training against several steganalysis algorithms yields cover images better suited to content hiding. Meanwhile, several lightweight channel attention modules are added to the generator to adaptively adjust the distribution of the adversarial noise in the original image, improving the anti-steganalysis capability of the generated adversarial images. In addition, a dynamic weighting scheme combining multiple discriminant losses with the mean squared error loss is designed to further enhance adversarial image quality and to ensure fast, stable network convergence. Result Experiments on the BOSSbase 1.01 dataset compare the proposed method with four current mainstream methods. After training on the original stego-images, the proposed method lowers the average detection accuracy of five high-performance steganalyzers by 1.6% relative to the other four methods, including the U-Net-based generative multiple-adversarial steganography algorithm; after retraining with adversarial images and enhanced stego-images, it still lowers the average detection accuracy of the five steganalyzers by 6.8% relative to the other four methods. The quality of the adversarial images is also analyzed: the average peak signal-to-noise ratio (PSNR) of 2,000 adversarial images generated from the test set reaches 39.9251 dB. The experimental results demonstrate that the proposed steganography network greatly improves the security of the steganographic algorithm. Conclusion The proposed method outperforms other state-of-the-art methods in steganographic security, and the generated adversarial images have high visual quality.
High-security image steganography combining multiple adversarial networks and channel attention
Ma Bin, Li Kun1, Xu Jian2, Wang Chunpeng3, Li Jian4, Zhang Liwei5 (1. Qilu University of Technology (Shandong Academy of Sciences); 2. Shandong University of Finance and Economics; 3. Qilu University of Technology (Shandong Academy of Sciences); 4. School of Cyber Security, Qilu University of Technology (Shandong Academy of Sciences); 5. Integrated Electronic Systems Lab Co., Ltd.)
Objective The advancement of current steganographic techniques faces many challenges. Methods that modify the original image to hide secret information leave traces, rendering them susceptible to detection by steganalyzers. Although coverless steganography improves security, it suffers from small embedding capacity, the need for a large image database, and difficulty in extracting the secret information. Additionally, cover-image generative steganography produces small and unnatural images. The introduction of adversarial examples provides a new approach to these limitations: subtle perturbations are added to the original image to form an adversarial image that is visually indistinguishable from the original yet causes a classifier to output wrong results with high confidence, which enhances the security of image steganography. However, most existing steganographic algorithms based on adversarial examples can only design adversarial samples against a single steganalyzer, leaving them vulnerable to the latest convolutional neural network-based steganalyzers such as SRNet and Zhu-Net. To overcome this, a high-security image steganography method combining multiple adversarial networks and channel attention is proposed in this paper. Method In the proposed method, the generator G, which employs a U-Net architecture with added channel attention modules, produces the adversarial noise V. The noise V is then added to the original image X to obtain the adversarial image. To generate high-quality and semantically meaningful adversarial images, the pixel-space mean squared error loss MSE_loss is adopted to train the generator G. Next, the steganography network (SN) generates a stego-image from the original image X, and both X and its stego-image are input into the steganalysis optimization network (SON) to optimize its parameters.
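The paper does not spell out the layer configuration of the channel attention module or the noise-addition step; a minimal sketch of one plausible form, a squeeze-and-excitation-style channel gate followed by clipped noise addition, is shown below (all function names, weight shapes, and the reduction structure are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a feature map of shape (C, H, W).

    w1 (C//r, C) and w2 (C, C//r) are assumed fully connected weights
    with channel-reduction ratio r; both are illustrative placeholders.
    """
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # FC -> ReLU -> FC -> sigmoid
    return feat * excite[:, None, None]                   # rescale each channel

def adversarial_image(x, noise, lo=0.0, hi=255.0):
    """Add the generator's adversarial noise V to the original image X
    and clip back to the valid pixel range."""
    return np.clip(x + noise, lo, hi)
```

In this sketch the gate values lie in (0, 1), so channels carrying noise that perturbs the steganalyzers more can be emphasized while others are suppressed, which matches the adaptive noise-distribution role described above.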
Moreover, we build multiple steganalysis adversarial networks (SAN) to discriminate between the original image X and its adversarial image, assigning different scores to the two and providing the multiple discriminant loss SDO_loss1. Furthermore, we embed secret messages into the adversarial image through the steganography network (SN) to generate the enhanced stego-image. The adversarial image and the enhanced stego-image are fed back into the optimized multiple steganalyzers to improve the anti-steganalysis performance of the adversarial image; the steganalysis adversarial networks (SAN) evaluate the data-hiding capability of the adversarial image and provide the multiple discriminant loss SDO_loss2. Additionally, to improve both the image quality and the anti-steganalysis ability of the adversarial image, a weighted superposition of MSE_loss and the multiple steganalysis discriminant losses SDO_loss1 and SDO_loss2 is employed as the cumulative loss function of generator G. As a result, the proposed method achieves fast and stable network convergence as well as high stego-image visual quality and anti-steganalysis ability. Result To improve the anti-steganalysis ability of adversarial images, we initially selected four high-performance deep learning steganalyzers, namely Xu-Net, Ye-Net, SRNet, and Zhu-Net, for simultaneous adversarial training. However, conducting experiments with four steganalysis networks at once sharply increases the number of model parameters, resulting in slow training and a long training period. Furthermore, during adversarial image generation, each iteration of adversarial noise is produced according to the gradient feedback of all four steganalysis networks, so the original image is subject to excessive, unnecessary adversarial noise, leading to low-quality adversarial images.
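The cumulative generator loss described above is a weighted superposition of the pixel loss and the two multiple-discriminant losses; the paper describes the weighting as dynamic, and its actual weight values are not given here, so the static sketch below uses placeholder weights purely for illustration:

```python
def generator_loss(mse_loss, sdo_loss1, sdo_loss2,
                   w_mse=1.0, w_sdo1=0.5, w_sdo2=0.5):
    """Cumulative loss of generator G: weighted sum of the pixel-space
    MSE loss and the two multiple steganalysis discriminant losses.
    The weights here are illustrative placeholders, not the paper's values."""
    return w_mse * mse_loss + w_sdo1 * sdo_loss1 + w_sdo2 * sdo_loss2
```

A dynamic scheme would adjust w_mse, w_sdo1, and w_sdo2 during training (e.g. as a function of the epoch or of the current loss magnitudes) rather than keeping them fixed as in this sketch.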
To address this issue, we conduct ablation experiments on the different steganalysis networks employed in training, with the objective of decreasing model parameters, reducing training time, and ultimately enhancing the quality of the adversarial images to improve their anti-steganalysis capability. The role of the generator is to produce adversarial noise, which is incorporated into the original image to form the adversarial image. Different positions of the adversarial noise in the original image perturb the steganalysis networks differently and influence the quality of the generated adversarial images in different ways. To examine the effectiveness of the channel attention module, we also run ablation experiments that add the module at various positions in the generator. Based on these ablations, the parameters of the generator loss function are fine-tuned. Subsequently, we generated 2,000 adversarial images with the proposed model and evaluated their quality. The average peak signal-to-noise ratio (PSNR) of the 2,000 generated adversarial images is 39.9251 dB; over 99.55% of them exceed 39 dB, and more than 75% exceed 40 dB. Additionally, the average structural similarity index (SSIM) of the generated adversarial images is 0.9625: more than 69.85% exceed 0.955, and more than 55.6% exceed 0.960. These results indicate that the generated adversarial images are visually very similar to the original images. Finally, we compared the proposed method with current state-of-the-art methods on the BOSSbase 1.01 dataset.
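The PSNR figures quoted above follow the standard definition for 8-bit images; a minimal sketch of that metric (the function name and array handling are ours, but the formula is the conventional one):

```python
import numpy as np

def psnr(original, adversarial, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit images, in dB:
    10 * log10(peak^2 / MSE)."""
    diff = (np.asarray(original, dtype=np.float64)
            - np.asarray(adversarial, dtype=np.float64))
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For instance, a uniform per-pixel difference of 1 gray level gives an MSE of 1 and hence a PSNR of about 48.13 dB, so the reported average of 39.9251 dB corresponds to a very small mean perturbation.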
After training on the original stego-images, the average accuracy of the five steganalysis methods decreases by 1.6% compared with the other four methods; after further training with adversarial images and enhanced stego-images, it decreases by 6.8% compared with the other four methods. The experimental results indicate that the proposed method significantly improves the security of the steganographic algorithm. Conclusion In this paper, we propose a steganographic architecture based on a U-Net framework with lightweight channel attention modules to generate adversarial images that can resist multiple steganalysis networks. The experimental results demonstrate that the security and generalization of the proposed algorithm exceed those of the compared steganographic methods.