Fast detection algorithm for ship in arbitrary direction with dense subregion cutting
2021, Vol. 26, No. 3, pp. 654-662
Received: 2020-04-02; Revised: 2020-06-22; Accepted: 2020-06-29; Published in print: 2021-03-16
DOI: 10.11834/jig.200111
Objective
Detecting arbitrarily oriented ships in remote sensing images amounts to producing the minimum circumscribed rectangular bounding box of each ship in the image. Arbitrarily oriented ship detectors built on two-stage deep networks are slow, while those built on one-stage deep networks are faster but suffer from a high false alarm rate because ships have a large aspect ratio. To reduce the false alarm rate of one-stage detection and further increase detection speed, a fast detection algorithm based on dense sub-region cutting is proposed, tailored to the shape characteristics of ship targets.
Method
Along the long-axis direction, the whole ship is densely cut into several local sub-regions, each contained in a square annotation box, which ensures the best effective area ratio of the sub-region within each annotation box and preserves the generalization ability of the core detection network. The core network is trained with the sub-regions as detection targets, and overlapping sub-regions are consolidated during training. The detected sub-regions are then merged by subgraph segmentation, from which key ship parameters such as the orientation angle are estimated. Sub-region merging replaces non-maximum suppression as the post-processing step, which keeps detection fast.
Result
On the HRSC2016 (high resolution ship collections) dataset, the proposed algorithm is compared with five recent algorithms: improved YOLOv3 (you only look once), RRCNN (rotated region convolutional neural network), RRPN (rotation region proposal networks), R-DFPN-3 (rotation dense feature pyramid network), and R-DFPN-4. Compared with R-DFPN-4, the most accurate of the comparison algorithms, the mAP (mean average precision, IOU (intersection over union) = 0.5) of the proposed algorithm is 1.9% higher and the average time consumption is 57.9% lower. Compared with improved YOLOv3, the fastest comparison algorithm, the mAP (IOU = 0.5) is 3.6% higher and the average time consumption is 31.4% lower.
Conclusion
The proposed arbitrarily oriented ship detection algorithm exploits the shape characteristics of ship targets and outperforms current mainstream arbitrarily oriented ship detection algorithms in both detection accuracy and detection speed, with a marked improvement in speed.
Objective
Ship detection based on remotely sensed images aims to locate ships, which is of great significance for national water surveillance and territorial security. The rectangular bounding boxes used for target location in typical deep learning methods are axis-aligned (horizontal-vertical), whereas ships in remotely sensed images are arbitrarily oriented. For narrow and long ships with arbitrary directions, an axis-aligned bounding box is fairly rough: when the ship deviates from the vertical or horizontal direction, the bounding box is inaccurate and contains many non-ship pixels. If multiple ships are close to one another in the image, several ships may fail to be located because they are overlapped by the bounding boxes of neighboring ships. Therefore, a finer bounding box is beneficial for detecting ship targets, and more precise positioning information helps subsequent ship target recognition. For this reason, classical deep-learning-based target detection is extended, and the finer minimum circumscribed rectangular bounding box is used to locate the ship target. Existing extended detection algorithms can be divided into two categories: one-stage detection and two-stage detection. One-stage detection directly outputs the target's location estimate, whereas two-stage detection classifies the proposed regions to eliminate false targets. The disadvantage of two-stage detection is its slower speed; one-stage detection is faster, but its false alarm rate is higher for narrow and long ships. A fast detection algorithm based on dense sub-region cutting is therefore proposed, exploiting the shape characteristics of ship targets, to reduce the false alarm rate of one-stage detection and further improve detection speed.
Method
The basic idea of the algorithm is to cut a ship into several sub-regions, detect those sub-regions, and then combine them, exploiting the long and narrow shape of ships. First, the whole ship is densely cut along its long-axis direction into several local sub-regions, each contained in a square annotation box, which maximizes the proportion of pixels belonging to the ship, namely the effective area ratio, in every annotation box. This suppresses the influence of background noise on the sub-region annotation boxes and gives the sub-region detection network reliable generalization ability. The core detection network adopts a multi-resolution structure with three output branches from coarse to fine resolution. The cutting density is set according to the minimum spatial compression ratio of the output branches, so that the sub-regions of the same ship remain connected in each output branch.
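As a rough illustration of this cutting step, the sketch below generates square sub-region annotation boxes of side equal to the ship width, spaced along the long axis. The function name, the stride rule, and the stride_ratio parameter are assumptions made for illustration; the paper ties the cutting density to the minimum spatial compression ratio of the output branches rather than to a fixed ratio.

```python
import numpy as np

def cut_ship_into_subregions(cx, cy, length, width, theta, stride_ratio=0.5):
    """Densely cut a rotated ship box into square sub-region boxes.

    (cx, cy): ship center; length/width: long/short sides in pixels;
    theta: long-axis angle in radians. stride_ratio (hypothetical
    parameter) controls the cutting density: stride = stride_ratio * width.
    """
    axis = np.array([np.cos(theta), np.sin(theta)])  # unit long-axis vector
    stride = stride_ratio * width
    half_span = max((length - width) / 2.0, 0.0)  # keep squares inside the ship
    if half_span == 0 or stride <= 0:
        offsets = np.array([0.0])
    else:
        n = int(np.floor(2 * half_span / stride)) + 1
        offsets = np.linspace(-half_span, half_span, max(n, 2))
    centers = np.array([cx, cy]) + offsets[:, None] * axis
    # Each sub-region is a square annotation box of side `width`, so the
    # ship pixels dominate every box (high effective area ratio).
    return [(x - width / 2, y - width / 2, x + width / 2, y + width / 2)
            for x, y in centers]
```

A stride no larger than the side length keeps consecutive squares overlapping, so the sub-regions of one ship stay spatially connected in every output branch.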
Second, the core sub-region detection network is trained, and overlapping sub-regions in the coarse branches are reorganized during training. Because the sub-regions are densely distributed, spatially adjacent sub-regions may be mapped to the same point of the output grid in the coarser-resolution output branches; this is called sub-region overlapping. Each point in the output grid can correspond to at most one sub-region target, so the overlapping sub-regions are reorganized into a new pseudo sub-region whose center point is the average of the original center points and whose size is consistent with that of the original sub-regions. Across output layers of different resolutions, the center points of a pseudo sub-region differ slightly, but the overall difference is small.
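A minimal sketch of this reorganization rule follows, assuming sub-regions are represented by their center points and a common square size; the grid hashing by integer division is an illustrative stand-in for the mapping from image coordinates to one output branch's grid.

```python
from collections import defaultdict
import numpy as np

def reorganize_overlaps(centers, size, grid_stride):
    """Merge sub-regions that fall into the same output-grid cell.

    centers: (N, 2) array of sub-region center points; size: common square
    side length; grid_stride: spatial compression ratio of one output branch.
    Returns pseudo sub-regions as (center, size) pairs.
    """
    cells = defaultdict(list)
    for c in np.asarray(centers, dtype=float):
        # Grid cell that this center maps to in the output layer.
        cells[(int(c[0] // grid_stride), int(c[1] // grid_stride))].append(c)
    # A cell can hold at most one target, so colliding sub-regions are
    # replaced by one pseudo sub-region: averaged center, unchanged size.
    return [(np.mean(group, axis=0), size) for group in cells.values()]
```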
Lastly, the detected sub-regions are merged with a subgraph segmentation method. The whole remotely sensed image is modeled as a graph in which each detected sub-region is a node, and the connectivity between every two sub-regions is determined by their spatial distance and size difference. Subgraph segmentation clusters the sub-regions belonging to the same ship, and the key parameters of the corresponding ship, such as length, width, and rotation angle, are estimated from the spatial distribution of the clustered sub-regions. Compared with conventional deep learning target detection methods, the core detection network structure of the proposed algorithm is unchanged, and sub-region merging replaces common non-maximum suppression (NMS) as the post-processing step.
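The merging step can be sketched as connected-component clustering over the sub-region graph followed by a fit of the rotated box. The connectivity thresholds, and the use of the principal direction of the clustered centers to estimate the rotation angle, are illustrative assumptions rather than the paper's exact rules.

```python
import numpy as np

def merge_subregions(centers, sizes, dist_ratio=1.5, size_tol=0.3):
    """Cluster detected sub-regions into ships and fit rotated-box parameters.

    centers: (N, 2) array; sizes: (N,) array of square side lengths.
    Two nodes are connected when their center distance is below
    dist_ratio * mean size and their relative size difference is below
    size_tol (both thresholds are hypothetical).
    """
    centers, sizes = np.asarray(centers, dtype=float), np.asarray(sizes, dtype=float)
    n = len(centers)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            mean_size = 0.5 * (sizes[i] + sizes[j])
            if (np.linalg.norm(centers[i] - centers[j]) < dist_ratio * mean_size
                    and abs(sizes[i] - sizes[j]) < size_tol * mean_size):
                parent[find(i)] = find(j)  # connect the two sub-regions

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)

    ships = []
    for members in clusters.values():
        pts = centers[members]
        width = float(np.mean(sizes[members]))  # short side ~ sub-region size
        ctr = pts.mean(axis=0)
        if len(pts) > 1:
            # Principal direction of the clustered centers gives the long axis.
            _, _, vt = np.linalg.svd(pts - ctr)
            angle = float(np.arctan2(vt[0, 1], vt[0, 0]))
            span = (pts - ctr) @ vt[0]
            length = float(span.max() - span.min()) + width
        else:
            angle, length = 0.0, width
        ships.append((tuple(ctr), length, width, angle))
    return ships
```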
Result
The proposed algorithm is compared with five state-of-the-art detection algorithms, namely, improved YOLOv3 (you only look once), RRCNN (rotated region convolutional neural network), RRPN (rotation region proposal networks), R-DFPN-3 (rotation dense feature pyramid network), and R-DFPN-4, on the HRSC2016 (high resolution ship collections) dataset. The improved YOLOv3 is a one-stage detector, and the four other algorithms are two-stage detectors. The quantitative evaluation metrics are mean average precision (mAP) and mean consuming time (mCT). The experimental results show that the proposed algorithm outperforms all the comparison algorithms on the HRSC2016 dataset. Compared with R-DFPN-4, the comparison algorithm with the highest detection accuracy, mAP (higher is better) increases by 1.9% and mCT (lower is better) decreases by 57.9%. Compared with improved YOLOv3, the comparison algorithm with the fastest detection speed, mAP increases by 3.6% and mCT decreases by 31.4%. The running speeds of the proposed algorithm and the conventional YOLOv3 algorithm are further analyzed and compared. The two algorithms share the same core detection network, so their running speeds differ only in the post-processing phase: the sub-region merging of the proposed algorithm takes about 11 ms, whereas the non-maximum suppression (NMS) of conventional YOLOv3 takes approximately 5 ms on the HRSC2016 dataset. Compared with conventional YOLOv3, the proposed algorithm obtains finer positioning information for rotated ships, while its running time increases by only 9%.
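For concreteness, the IOU = 0.5 matching criterion behind the reported mAP can be computed for rotated boxes as a polygon overlap; the sketch below assumes the shapely library is available and is not the evaluation code used in the paper.

```python
import numpy as np
from shapely.geometry import Polygon  # assumed dependency for polygon overlap

def rotated_box_iou(box_a, box_b):
    """IOU of two rotated boxes given as (cx, cy, length, width, angle_rad).

    Illustrates the IOU = 0.5 matching criterion behind the reported mAP:
    a detection is counted as correct when its IOU with a ground-truth
    rotated box reaches 0.5.
    """
    def corners(cx, cy, l, w, a):
        ax = np.array([np.cos(a), np.sin(a)])   # long-axis direction
        pp = np.array([-np.sin(a), np.cos(a)])  # perpendicular direction
        c = np.array([cx, cy])
        return [tuple(c + sx * (l / 2) * ax + sy * (w / 2) * pp)
                for sx, sy in [(1, 1), (1, -1), (-1, -1), (-1, 1)]]

    pa, pb = Polygon(corners(*box_a)), Polygon(corners(*box_b))
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter)
```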
Conclusion
A fast, arbitrarily oriented ship detection algorithm based on dense sub-region cutting is proposed, exploiting the long and narrow shape characteristics of ship targets. The experimental results show that the algorithm outperforms several state-of-the-art arbitrarily oriented ship detection algorithms, especially in detection speed.
Eldhuset K. 1996. An automatic ship and ship wake detection system for spaceborne SAR images in coastal regions. IEEE Transactions on Geoscience and Remote Sensing, 34(4): 1010-1019 [DOI: 10.1109/36.508418]
Fingas M F and Brown C E. 2001. Review of ship detection from airborne platforms. Canadian Journal of Remote Sensing, 27(4): 379-385[DOI:10.1080/07038992.2001.10854880]
Girshick R. 2015. Fast R-CNN [EB/OL]. [2020-03-02]. https://arxiv.org/pdf/1504.08083.pdf
Lin T Y, Goyal P, Girshick R, He K M and Dollár P. 2017. Focal loss for dense object detection//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 2999-3007 [DOI: 10.1109/TPAMI.2018.2858826]
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y and Berg A C. 2016. SSD: single shot multibox detector//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 21-37 [DOI: 10.1007/978-3-319-46448-0_2]
Liu Z K, Hu J G, Weng L B and Yang Y P. 2017a. Rotated region based CNN for ship detection//Proceedings of 2017 IEEE International Conference on Image Processing. Beijing, China: IEEE: 900-904 [DOI: 10.1109/ICIP.2017.8296411]
Liu Z K, Yuan L, Weng L B and Yang Y Q. 2017b. A high resolution optical satellite image dataset for ship recognition and some new baselines//Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods. Porto, Portugal: ICPRAM: 324-331 [DOI: 10.5220/0006120603240331]
Ma J Q, Shao W Y, Ye H, Wang L, Wang H, Zheng Y B and Xue X Y. 2018. Arbitrary-oriented scene text detection via rotation proposals. IEEE Transactions on Multimedia, 20(11): 3111-3122[DOI:10.1109/TMM.2018.2818020]
Redmon J, Divvala S, Girshick R and Farhadi A. 2016. You only look once: unified, real-time object detection//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 779-788 [DOI: 10.1109/CVPR.2016.91]
Redmon J and Farhadi A. 2018. YOLOv3: an incremental improvement [EB/OL]. [2020-03-02]. https://arxiv.org/pdf/1804.02767.pdf
Ren S Q, He K M, Girshick R and Sun J. 2015. Faster R-CNN: towards real-time object detection with region proposal networks//Proceedings of Advances in Neural Information Processing Systems. Montreal, Canada: NIPS: 91-99
Wang C L, Bi F K, Zhang W P and Chen L. 2017. An intensity-space domain CFAR method for ship detection in HR SAR images. IEEE Geoscience and Remote Sensing Letters, 14(4): 529-533[DOI:10.1109/LGRS.2017.2654450]
Wu Z H, Li L and Gao Y M. 2019. Rotation convolution ensemble YOLOv3 model for ship detection in remote sensing images. Computer Engineering and Applications, 55(22): 146-151 [DOI: 10.3778/j.issn.1002-8331.1902-0144]
Yang X, Sun H, Fu K, Yang J R, Sun X, Yan M L and Guo Z. 2018. Automatic ship detection in remote sensing images from google earth of complex scenes based on multiscale rotation dense feature pyramid networks. Remote Sensing, 10(1): #132[DOI:10.3390/rs10010132]
Yu Y D, Yang X B, Xiao S J and Lin J L. 2012. Automated ship detection from optical remote sensing images. Key Engineering Materials, 500: 785-791[DOI:10.4028/www.scientific.net/KEM.500.785]
Zhu C R, Zhou H, Wang R S and Guo J. 2010. A novel hierarchical method of ship detection from spaceborne optical image based on shape and texture features. IEEE Transactions on Geoscience and Remote Sensing, 48(9): 3446-3456[DOI:10.1109/TGRS.2010.2046330]