RTDNet:面向高分辨率卫星影像的赤潮探测网络
RTDNet: red tide detection network for high-resolution satellite images
2023年28卷第12期 页码:3911-3921
纸质出版日期: 2023-12-16
DOI: 10.11834/jig.221174
崔宾阁, 方喜, 路燕, 黄玲, 刘荣杰. 2023. RTDNet:面向高分辨率卫星影像的赤潮探测网络. 中国图象图形学报, 28(12):3911-3921
Cui Binge, Fang Xi, Lu Yan, Huang Ling, Liu Rongjie. 2023. RTDNet: red tide detection network for high-resolution satellite images. Journal of Image and Graphics, 28(12):3911-3921
目的
赤潮是一种常见的海洋生态灾害,严重威胁海洋生态系统安全。及时准确获取赤潮的发生和分布信息可以为赤潮的预警和防治提供有力支撑。然而,受混合像元和水环境要素影响,赤潮分布精细探测仍是挑战。针对赤潮边缘探测的难点,结合赤潮边缘高频特征学习与位置语义,提出了一种计算量小、精度高的网络模型RTDNet(red tide detection network)。
方法
针对赤潮边缘探测不准确的问题,设计了基于RIR(residual-in-residual)结构的网络,以提取赤潮边缘水体的高频特征;利用多感受野结构和坐标注意力机制捕获赤潮水体的位置语义信息,增强赤潮边缘水体的细节信息并抑制无用的特征。
结果
在GF1-WFV(Gaofen1 wide field of view)赤潮数据集上的实验结果表明,所提出的RTDNet模型赤潮探测效果不仅优于支持向量机(support vector machine,SVM)、U-Net、DeepLabv3+及HRNet(high-resolution network)等通用机器学习和深度学习模型,而且也优于赤潮指数法GF1_RI(Gaofen1 red tide index)以及赤潮探测专用深度学习模型RDU-Net(red tide detection U-Net),赤潮误提取、漏提取现象明显减少,F1分数在两幅测试图像上分别达到了0.905和0.898,相较于性能第2的模型DeepLabv3+提升了2%以上。而且,所提出的模型参数量小,仅有2.65 MB,约为DeepLabv3+的13%。
结论
面向赤潮探测提出一种基于RIR结构的赤潮深度学习探测模型,通过融合多感受野结构和注意力机制提升了赤潮边缘探测的精度和稳定性,同时有效降低了计算量。本文方法展现了较好的应用效果,可适用于不同高分辨率卫星影像的赤潮探测。
Objective
Red tide is a harmful ecological phenomenon that seriously threatens the safety of the marine ecosystem. Accurate detection of the occurrence and distribution of a small-scale red tide can provide basic information for its prediction and early warning. Red tides are short-lived and change rapidly, so on-site observations can hardly meet the requirements of timely and accurate detection; remote sensing has therefore become an important technology for red tide monitoring. However, traditional index-based extraction methods built on spectral features are easily influenced by ocean background noise, and their thresholds are hard to determine because the water color at the red tide margin is inconspicuous. Deep-learning-based methods can extract red tide information end to end without manually set thresholds, yet they treat low- and high-frequency red tide information equally, which hinders the representation ability of the convolutional neural network. To solve the problem of locating and identifying small-scale red tide marginal waters, this paper proposes a semantic segmentation method for the remote sensing detection of small-scale red tide by combining the high-frequency feature learning of red tide with position semantics.
Method
The residual-in-residual (RIR) structure is used to extract the high-frequency characteristics of red tide marginal waters; its residual branch is alternately composed of multiple residual groups and multi-receptive-field modules. The residual group uses coordinate attention and a dynamic weight mechanism to capture the position semantics of red tide water bodies, while the multi-receptive-field structures capture multi-scale information. On this basis, a small-scale red tide detection network called RTDNet is constructed to enhance the detailed information of red tide marginal waters and suppress useless features. To verify the validity of the model, experiments are conducted on the GF1-WFV red tide dataset. Owing to limitations in computing resources, the remote sensing images are cropped to 64 × 64 pixels, and data augmentation operations such as flipping, translation, and rotation are performed, yielding a total of 1 050 samples. Adam is selected as the optimizer, with a learning rate of 0.000 1, a batch size of 2, 100 training epochs, and a binary cross-entropy loss function. The experiments are carried out under the Ubuntu 18.04 operating system with an NVIDIA GeForce RTX 2080Ti GPU, and the network is implemented in Python 3.6 with the Keras 2.4.0 framework. Precision (P), recall (R), F1-score (F1), and intersection over union (IoU) are used to quantitatively evaluate the model.
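As a rough illustration of the coordinate attention used in the residual groups (following the mechanism of Hou et al., 2021, which the paper cites), the NumPy sketch below reweights a feature map with direction-aware attention along the height and width axes. The weight matrices here are random stand-ins for the learned 1 × 1 convolutions, not the trained RTDNet weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, reduction=8, rng=None):
    """Minimal NumPy sketch of coordinate attention.

    x: feature map of shape (C, H, W). The matrices w1, w_h, w_w stand in
    for the learned 1x1 convolutions and are randomly initialized here.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = x.shape
    mid = max(c // reduction, 1)

    # Direction-aware pooling: averaging over W gives a (C, H) descriptor,
    # averaging over H gives a (C, W) descriptor.
    pool_h = x.mean(axis=2)                        # (C, H)
    pool_w = x.mean(axis=1)                        # (C, W)

    # Concatenate along the spatial axis and apply a shared channel-mixing
    # transform (the 1x1 conv) followed by a ReLU nonlinearity.
    y = np.concatenate([pool_h, pool_w], axis=1)   # (C, H + W)
    w1 = rng.standard_normal((mid, c)) * 0.1
    y = np.maximum(w1 @ y, 0.0)                    # (mid, H + W)

    # Split back into the two directions and expand to attention gates.
    f_h, f_w = y[:, :h], y[:, h:]
    w_h = rng.standard_normal((c, mid)) * 0.1
    w_w = rng.standard_normal((c, mid)) * 0.1
    g_h = sigmoid(w_h @ f_h)                       # (C, H) gate along height
    g_w = sigmoid(w_w @ f_w)                       # (C, W) gate along width

    # Reweight the input by both positional attention maps.
    return x * g_h[:, :, None] * g_w[:, None, :]
```

Because each gate lies in (0, 1) and encodes one spatial direction, the product g_h · g_w attenuates feature responses position by position, which is how the network suppresses useless features while keeping edge detail.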
Result
Experimental results on the GF1-WFV red tide dataset show that RTDNet is superior to the general models SVM, U-Net, DeepLabv3+, and HRNet, the red tide index method GF1_RI, and the dedicated red tide detection model RDU-Net in both qualitative and quantitative terms. The results of RTDNet are close to the ground truth, its extraction of red tide marginal waters is better than that of the other models, and it produces far fewer false and missed extractions. Quantitatively, the F1-scores of RTDNet reach 0.905 and 0.898 on the two test images, and its IoU values reach 0.827 and 0.815, respectively. Compared with the second-best-performing model DeepLabv3+, the F1-score of RTDNet is higher by more than 0.02 and its IoU by more than 0.05. Moreover, the number of parameters of RTDNet is only 2.65 MB, about 13% of that of DeepLabv3+. An ablation experiment verifies that each module in RTDNet helps improve red tide detection. Visualizations of feature maps at different stages of the network show how the network gradually refines the extracted red tide.
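The quantitative comparison above is based on pixel-wise metrics over binary red tide masks. A minimal sketch of how precision, recall, F1, and IoU can be computed (assuming 1 marks a red tide pixel; this is a generic formulation, not the paper's evaluation code):

```python
import numpy as np

def red_tide_metrics(pred, truth):
    """Pixel-wise precision, recall, F1, and IoU for binary masks.

    pred, truth: array-like of 0/1 labels, where 1 = red tide pixel.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()    # correctly detected red tide
    fp = np.logical_and(pred, ~truth).sum()   # false extraction
    fn = np.logical_and(~pred, truth).sum()   # missed extraction
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```

With these definitions, fewer false extractions raise precision and fewer missed extractions raise recall, so the F1 gains reported above reflect improvement on both error types at the red tide margin.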
Conclusion
This paper proposes a small-scale red tide remote sensing detection network called RTDNet based on the residual-in-residual structure, a multi-receptive-field structure, and an attention mechanism. The model effectively addresses the false and missed extractions caused by the inconspicuous water color at the edge of the red tide, improves the accuracy and stability of red tide marginal water detection, and effectively reduces the computational load. Experimental results show that RTDNet outperforms the other methods and models in detecting small-scale red tide in remote sensing images. The method is suitable for the accurate localization and area extraction of early-stage marine disasters (e.g., red tide, green tide, and golden tide) from remote sensing imagery and has reference value for other semantic segmentation tasks with fuzzy edges.
赤潮探测;GF-1 WFV遥感影像;语义分割;残差网络;注意力机制
red tide detection; GF-1 WFV remote sensing image; semantic segmentation; residual network; attention mechanism
Ahn Y H and Shanmugam P. 2006. Detecting the red tide algal blooms from satellite ocean color observations in optically complex Northeast-Asia coastal waters. Remote Sensing of Environment, 103(4): 419-437 [DOI: 10.1016/j.rse.2006.04.007]
Chen L C, Papandreou G, Schroff F and Adam H. 2017. Rethinking atrous convolution for semantic image segmentation [EB/OL]. [2022-11-24]. https://arxiv.org/pdf/1706.05587.pdf
Chen L C, Zhu Y K, Papandreou G, Schroff F and Adam H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 833-851 [DOI: 10.1007/978-3-030-01234-2_49]
Cortes C and Vapnik V. 1995. Support-vector networks. Machine Learning, 20(3): 273-297 [DOI: 10.1007/BF00994018]
Dong R S, Ma Y Q, Liu Y and Li F Y. 2022. CRNet: class relation network for crop remote sensing image semantic segmentation. Journal of Image and Graphics, 27(11): 3382-3394
董荣胜, 马雨琪, 刘意, 李凤英. 2022. 加强类别关系的农作物遥感图像语义分割. 中国图象图形学报, 27(11): 3382-3394 [DOI: 10.11834/jig.210760]
He K M, Zhang X Y, Ren S Q and Sun J. 2016. Deep residual learning for image recognition//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 770-778 [DOI: 10.1109/CVPR.2016.90]
Hou Q B, Zhou D Q and Feng J S. 2021. Coordinate attention for efficient mobile network design//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 13708-13717 [DOI: 10.1109/CVPR46437.2021.01350]
Kim S M, Shin J, Baek S and Ryu J H. 2019. U-Net convolutional neural network model for deep red tide learning using GOCI. Journal of Coastal Research, 90(S1): 302-309 [DOI: 10.2112/SI90-038.1]
Li H Y, Li C G, An J B and Ren J L. 2019. Attention mechanism improves CNN remote sensing image object detection. Journal of Image and Graphics, 24(8): 1400-1408
李红艳, 李春庚, 安居白, 任俊丽. 2019. 注意力机制改进卷积神经网络的遥感图像目标检测. 中国图象图形学报, 24(8): 1400-1408 [DOI: 10.11834/jig.180649]
Li J H, Xing Q G, Zheng X Y, Li L and Wang L L. 2022. Noctiluca scintillans red tide extraction method from UAV images based on deep learning. Journal of Computer Applications, 42(9): 2969-2974
李敬虎, 邢前国, 郑向阳, 李琳, 王丽丽. 2022. 基于深度学习的无人机影像夜光藻赤潮提取方法. 计算机应用, 42(9): 2969-2974 [DOI: 10.11772/j.issn.1001-9081.2021071197]
Liu R J, Xiao Y F, Ma Y, Cui T W and An J B. 2022. Red tide detection based on high spatial resolution broad band optical satellite data. ISPRS Journal of Photogrammetry and Remote Sensing, 184: 131-147 [DOI: 10.1016/j.isprsjprs.2021.12.009]
Liu R J, Zhang J, Cui B G, Ma Y, Song P J and An J B. 2019. Red tide detection based on high spatial resolution broad band satellite data: a case study of GF-1. Journal of Coastal Research, 90(S1): 120-128 [DOI: 10.2112/SI90-015.1]
Liu S T, Huang D and Wang Y H. 2018. Receptive field block net for accurate and fast object detection//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 404-419 [DOI: 10.1007/978-3-030-01252-6_24]
Lou X L and Hu C M. 2014. Diurnal changes of a harmful algal bloom in the East China Sea: observations from GOCI. Remote Sensing of Environment, 140: 562-572 [DOI: 10.1016/j.rse.2013.09.031]
Mao X M and Huang W G. 2003. Algorithms of multiband remote sensing for coastal red tide waters. Chinese Journal of Applied Ecology, 14(7): 1200-1202
毛显谋, 黄韦艮. 2003. 多波段卫星遥感海洋赤潮水华的方法研究. 应用生态学报, 14(7): 1200-1202 [DOI: 10.13287/j.1001-9332.2003.0269]
Pan X L, Jiang T, Zhang Z, Sui B K, Liu C X and Zhang L J. 2020. A new method for extracting laver culture carriers based on inaccurate supervised classification with FCN-CRF. Journal of Marine Science and Engineering, 8(4): #274 [DOI: 10.3390/jmse8040274]
Rahman A F and Aslan A. 2016. Detecting red tide using spectral shapes//Proceedings of 2016 IEEE International Geoscience and Remote Sensing Symposium. Beijing, China: IEEE: 5856-5859 [DOI: 10.1109/IGARSS.2016.7730530]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Shin J, Jo Y H, Ryu J H, Khim B K and Kim S M. 2021. High spatial-resolution red tide detection in the southern coast of Korea using U-Net from PlanetScope imagery. Sensors, 21(13): #4447 [DOI: 10.3390/s21134447]
Siswanto E, Ishizaka J, Tripathy S C and Miyamura K. 2013. Detection of harmful algal blooms of Karenia mikimotoi using MODIS measurements: a case study of Seto-Inland Sea, Japan. Remote Sensing of Environment, 129: 185-196 [DOI: 10.1016/j.rse.2012.11.003]
Song Y, Wang N, Ding Y, Xin L, Sun Q and Jiang T. 2021. Red tide information identification method based on GF-4 satellite remote sensing data: a case study of Qinhuangdao sea area. Technology Innovation and Application, 11(34): 9-14
宋彦, 王宁, 丁一, 辛蕾, 孙青, 姜涛. 2021. 基于GF-4卫星遥感数据的赤潮信息识别方法——以秦皇岛海域为例. 科技创新与应用, 11(34): 9-14 [DOI: 10.19981/j.CN23-1581/G3.2021.34.002]
Sun J Q, Li Y F, Zhang W B and Liu P H. 2022. Dual-field feature fusion deep convolutional neural network based on discrete wavelet transformation. Computer Science, 49(6A): 434-440
孙洁琪, 李亚峰, 张文博, 刘鹏辉. 2022. 基于离散小波变换的双域特征融合深度卷积神经网络. 计算机科学, 49(6A): 434-440 [DOI: 10.11896/jsjkx.210900199]
Sun K, Xiao B, Liu D and Wang J D. 2019. Deep high-resolution representation learning for human pose estimation//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 5686-5696 [DOI: 10.1109/CVPR.2019.00584]
Xu Z Y, Zhou Y, Wang S X, Wang L T and Wang Z Q. 2021. U-Net for urban green space classification in Gaofen-2 remote sensing images. Journal of Image and Graphics, 26(3): 700-713
徐知宇, 周艺, 王世新, 王丽涛, 王振庆. 2021. 面向GF-2遥感影像的U-Net城市绿地分类. 中国图象图形学报, 26(3): 700-713 [DOI: 10.11834/jig.200052]
Zhai W K, Xu Z Z and Zhang J. 2016. Analysis on characteristics of red tide disaster in Hebei coastal waters. Marine Environmental Science, 35(2): 243-246, 251
翟伟康, 许自舟, 张健. 2016. 河北省近岸海域赤潮灾害特征分析. 海洋环境科学, 35(2): 243-246, 251 [DOI: 10.13634/j.cnki.mes.2016.02.015]
Zhang Y L, Li K P, Li K, Wang L C, Zhong B N and Fu Y. 2018. Image super-resolution using very deep residual channel attention networks//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 294-310 [DOI: 10.1007/978-3-030-01234-2_18]
Zhao X, Liu R J, Ma Y, Xiao Y F, Ding J, Liu J Q and Wang Q B. 2022. Red tide detection method for HY-1D coastal zone imager based on U-Net convolutional neural network. Remote Sensing, 14(1): #88 [DOI: 10.3390/rs14010088]
Zhuo X. 2018. Research on the basic characteristics of red tide in Fuzhou coastal waters during the past 10 years. Marine Forecasts, 35(4): 34-40
卓鑫. 2018. 近十年福州沿海赤潮的基本特征研究. 海洋预报, 35(4): 34-40 [DOI: 10.11737/j.issn.1003-0239.2018.04.005]