Multi-type flame detection combined with Faster R-CNN
2019, Vol. 24, No. 1, pp. 73-83
Received: 2018-07-10; Revised: 2018-08-10; Published in print: 2019-01-16
DOI: 10.11834/jig.180430

Objective
Flame detection can effectively prevent fires. Among current flame detection methods, traditional image processing techniques have poor anti-interference ability and weak generalization, and their detection results are highly sensitive to data fluctuations, while machine learning methods must design and extract suitable flame features for each scenario, which is a cumbersome process. To avoid complex hand-crafted feature extraction and still achieve good detection accuracy on flame images with complex backgrounds, illumination changes, and diverse flame shapes, a multi-type flame detection method based on Faster R-CNN is proposed.
Method
The method is based on deep learning and uses a convolutional neural network to learn image features automatically. First, the visual task was defined on self-built datasets. According to the sharp-corner characteristics, visual shape, and amount of smoke, the flame data were divided into three types: single-point, multi-point, and shapeless flames. In addition, feature-visualization experiments on the deep network revealed that artificial light sources and flames have similar contours; therefore, two artificial-light-source datasets (circular and square) were built as interference classes to ensure the stability of the detection model. Then, the training parameters were refined and the structure of the pre-trained convolutional neural network was adjusted, with the classification layer modified to fit the specific visual task. The image features abstracted by the convolutional and pooling layers of the deep convolutional neural network were fed into the region proposal network for regression, and a detector for each target class was obtained through a transfer-learning strategy. Finally, the detection model for the visual task was obtained, and its weights and bias parameters were saved. The sub-detectors for the individual classes are combined in parallel into an overall detector; at detection time each sub-detector outputs a score, and the highest-scoring class is taken as the detection result.
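The abstract does not specify the implementation framework, so the following is only a minimal sketch of the described pipeline, assuming a PyTorch/torchvision Faster R-CNN as a stand-in: one binary sub-detector per target class is obtained by transfer learning (replacing the classification layer of a pre-trained network), and the sub-detectors are run in parallel with the highest score taken as the detection result. The class names, threshold, and helper structure are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only (framework, class names, and threshold are assumptions;
# the paper does not publish its implementation).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Five target classes described in the abstract: three flame types plus two
# artificial-light-source interference classes.
TARGET_CLASSES = ["single_point_flame", "multi_point_flame", "shapeless_flame",
                  "circular_light", "square_light"]

def build_sub_detector():
    """One binary (background vs. one target class) Faster R-CNN sub-detector."""
    # Transfer learning: start from a detector pre-trained on a large dataset.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the classification layer: 2 outputs = background + one target class.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
    return model

@torch.no_grad()
def detect(sub_detectors: dict, image: torch.Tensor, score_threshold: float = 0.5):
    """Run all sub-detectors in parallel; the highest-scoring class wins."""
    best_class, best_score, best_box = None, 0.0, None
    for class_name, model in sub_detectors.items():
        model.eval()
        out = model([image])[0]  # dict with "boxes", "labels", "scores"
        if len(out["scores"]) == 0:
            continue
        top = out["scores"].argmax()
        score = out["scores"][top].item()
        if score >= score_threshold and score > best_score:
            best_class, best_score, best_box = class_name, score, out["boxes"][top]
    return best_class, best_score, best_box

# Hypothetical usage: build one sub-detector per class, fine-tune each on its own
# training set, then call detect() with a CHW float tensor in [0, 1].
# detectors = {name: build_sub_detector() for name in TARGET_CLASSES}
```

A one-vs-background sub-detector per class is used here because the abstract describes parallel per-class detectors whose scores are compared; a single multi-class detection head would be the more common alternative design.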
Result
First, each trained detector was tested on its own test set; then, the test sets of the other target classes were used to test each detector in order to verify that the detectors are mutually distinct. The experiments demonstrated that every detector has high specificity, which greatly reduces the possibility of misjudgment, and maintains good accuracy on flame images with severe deformation and complex backgrounds. The trained detection model also performed well in difficult situations such as small targets, multiple targets, diverse flame shapes, complex backgrounds, and illumination changes. On the test sets, the average accuracy of the detectors improved by 3.03% to 8.78%.
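As a companion sketch, the cross-testing protocol described above can be expressed as a small evaluation loop: each sub-detector is run not only on its own test set but also on every other class's test set, so diagonal entries should stay high and off-diagonal entries low. The `sub_detectors` and `test_sets` dictionaries are hypothetical inputs (per-class models as in the previous sketch, and lists of image tensors); this is not the authors' evaluation code.

```python
# Hedged sketch of the cross-testing protocol; data loading and the per-class
# models are assumed to exist (see the previous sketch).
import torch

@torch.no_grad()
def cross_test(sub_detectors: dict, test_sets: dict, score_threshold: float = 0.5):
    """rates[detector_class][test_class] = fraction of images with a detection."""
    rates = {}
    for det_class, model in sub_detectors.items():
        model.eval()
        rates[det_class] = {}
        for test_class, images in test_sets.items():
            hits = 0
            for image in images:
                out = model([image])[0]
                if len(out["scores"]) > 0 and out["scores"].max().item() >= score_threshold:
                    hits += 1
            # Specificity check: high when det_class == test_class, low otherwise.
            rates[det_class][test_class] = hits / max(len(images), 1)
    return rates
```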
Conclusion
The proposed flame detection method subdivides the flame categories by mining the visual morphological characteristics of flames and replaces manual feature design and extraction with a deep convolutional neural network. By combining the self-built datasets with a network model modified for the visual task, a multi-type flame detection model with good detection performance is obtained. The use of deep learning avoids tedious hand-crafted feature extraction, achieves good detection results, and gives the model strong anti-interference ability. This paper provides a more generalizable and concise solution to the flame detection problem.