Intelligent detection of lane based on road structure characteristics
2021, Vol. 26, No. 1, pp. 123-134
Received: 2020-07-31; Revised: 2020-09-14; Accepted: 2020-09-21; Published in print: 2021-01-16
DOI: 10.11834/jig.200431
Objective
In the development of intelligent connected vehicle systems, lane detection in complex environments is a key link. Most current lane detection algorithms rely on visual feature information such as color, gray level, and edges, so their detection accuracy is strongly affected by the environment. In contrast, the length, width, and direction of lane markings are highly regular; they are serialized and structurally correlated, and these properties are largely insensitive to the environment. A scheme that combines visual information with the spatial distribution relationship of lanes is therefore adopted to improve lane detection in complex environments.
Method
First, considering that lane markings in the bird's-eye view are distributed more densely in the vertical direction than in the horizontal direction, the grid density of the object detection algorithm YOLO v3 (you only look once v3) is refined from S×S to S×2S; the resulting YOLO v3 (S×2S) is better suited to detecting small objects with large aspect ratios. Second, exploiting the serialized and structurally correlated nature of lanes, a lane detection model based on the spatial distribution of lanes (BGRU-Lane, BGRU-L) is built on the bidirectional gated recurrent unit (BGRU). Finally, a confidence-based Dempster-Shafer (D-S) algorithm fuses the detection results of YOLO v3 (S×2S) and BGRU-L to improve lane detection in complex scenes.
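As context for the bird's-eye-view projection that this pipeline starts from, the following is a minimal OpenCV sketch of an inverse perspective mapping; the source trapezoid and destination rectangle are hypothetical placeholders that would come from the camera calibration, which the abstract does not specify.

```python
# Minimal sketch of a bird's-eye-view (inverse perspective mapping) transform.
# The four source points are hypothetical; in practice they come from the
# camera calibration used by the authors.
import cv2
import numpy as np

def to_birds_eye(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Hypothetical trapezoid on the road surface (image coordinates).
    src = np.float32([[w * 0.45, h * 0.63], [w * 0.55, h * 0.63],
                      [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
    # Target rectangle: parallel lane markings become vertical lines.
    dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                      [w * 0.75, h], [w * 0.25, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))

bev = to_birds_eye(np.zeros((480, 640, 3), dtype=np.uint8))
```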
Result
With the lane detection model that fuses visual information and the spatial distribution relationship, the mean average precision (mAP) reaches 90.28% on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) traffic dataset, and 92.49% and 91.73% in the Euro Truck Simulator 2 conventional scenes (ETS2_conv) and Euro Truck Simulator 2 complex scenes (ETS2_complex), respectively.
Conclusion
Increasing the vertical grid density of YOLO v3 significantly improves its accuracy in detecting small objects with large aspect ratios. Serialization and structural correlation are important properties of lane markings, and the accuracy of the distribution-based BGRU-L model is only weakly affected by the environment. After D-S fusion of the two models' detection results, high accuracy is achieved in complex scenes.
Objective
Intelligent connected vehicles are an important direction of intelligent transportation in China, and in the development of intelligent connected vehicle systems, the detection of lane markings in complex environments is a key link. If unmanned driving and intelligent connected vehicle technology can be applied to epidemic prevention and control, especially during the COVID-19 epidemic, the safety of drug delivery, meal transport, and medical waste recovery can be guaranteed, and the frequency of contact between medical staff and patients, along with the risk of viral cross-infection, can be reduced. However, current lane detection algorithms are mostly based on visual feature information such as color, gray level, and edges, and their detection accuracy is greatly affected by the environment. This makes it difficult for existing lane detection algorithms to meet the performance requirements of intelligent connected vehicles. In contrast, the length, width, and direction of lane markings are highly regular and exhibit serialization and structural association, characteristics that are not affected by visibility, weather, or obstacles. Vision-based lane detection achieves high accuracy only in clear scenes without obstacles. For this reason, a lane detection model based on both vision and spatial distribution is proposed to eliminate the influence of the environment on lane detection. This research can provide accurate lane information for the development of intelligent driving systems.
Method
When a traffic image is transformed into a bird's-eye view, its original scale changes and the lane spacing becomes short. The you only look once v3 (YOLO v3) algorithm has significant advantages in the speed and accuracy of detecting small objects, so it is used as the lane detector in this study. However, the distribution density of lanes in the longitudinal direction is greater than that in the horizontal direction. The network structure of YOLO v3 is therefore improved by increasing the vertical detection density to reduce the influence of the changed aspect ratio on detection: the image is divided into S×2S grids during lane detection, and the resulting YOLO v3 (S×2S) is well suited to detecting small, high-aspect-ratio lane segments.
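To make the grid change concrete, here is a minimal sketch of how doubling the vertical grid density separates vertically stacked lane segments; the cell-indexing function, the grid size S=13, and the 416-pixel input are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the S×2S grid idea: the image keeps S cells horizontally
# but uses 2S cells vertically, so each cell is half as tall and better matches
# short, densely stacked lane segments in the bird's-eye view.
def grid_cell(cx: float, cy: float, img_w: int, img_h: int, S: int = 13):
    """Return (row, col) of the cell responsible for a box centered at (cx, cy)."""
    col = min(int(cx / img_w * S), S - 1)            # S columns (unchanged)
    row = min(int(cy / img_h * (2 * S)), 2 * S - 1)  # 2S rows (doubled density)
    return row, col

# Two stacked segments that would share row 3 in a plain S×S grid fall into
# different cells once the vertical density is doubled:
print(grid_cell(200, 100, 416, 416))  # -> (6, 6)
print(grid_cell(200, 120, 416, 416))  # -> (7, 6)
```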
However, the YOLO v3 (S×2S) lane detection algorithm ignores the spatial information of lanes, and its accuracy degrades under poor light and vehicle occlusion. Bidirectional gated recurrent unit-lane (BGRU-L), a lane detection model based on the lane distribution law, is therefore proposed, exploiting the fact that the spatial distribution of lanes is unaffected by the environment. This model improves the generalization ability of lane detection in complex scenes. The study combines visual information and the spatial distribution relationship to avoid the large error of a single lane detector and to effectively reduce the uncertainty of the system.
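A minimal PyTorch sketch of a BGRU-based sequence predictor in the spirit of BGRU-L follows; the per-segment feature layout (normalized x, y, w, h) and the layer sizes are assumptions, since the abstract does not give the architecture.

```python
# Minimal sketch of a bidirectional-GRU lane predictor: each lane is treated
# as an ordered sequence of segment descriptors, and each segment's position
# is regressed from its neighbors in both directions along the lane.
import torch
import torch.nn as nn

class BGRULane(nn.Module):
    def __init__(self, feat_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.bgru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, feat_dim)  # fuse both directions

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, n_segments, feat_dim), segments ordered along the lane
        out, _ = self.bgru(seq)
        return self.head(out)  # per-segment position estimates

model = BGRULane()
pred = model(torch.randn(2, 10, 4))  # 2 lanes, 10 segments each
print(pred.shape)                    # torch.Size([2, 10, 4])
```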
A confidence-based Dempster-Shafer (D-S) algorithm is used to fuse the detection results of YOLO v3 (S×2S) and BGRU-L, guaranteeing the output of the optimal lane position.
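The sketch below shows Dempster's combination rule for two detectors over the hypotheses {lane, not_lane, unknown}; the mapping from detector confidence to mass values is an assumption for illustration, as the abstract does not specify the authors' construction.

```python
# Minimal sketch of Dempster's rule for two detectors. Each detector's
# confidence c is (hypothetically) turned into a mass function as
# m(lane)=c, m(not_lane)=0, m(unknown)=1-c.
def dempster_fuse(m1: dict, m2: dict) -> dict:
    hyps = ["lane", "not_lane", "unknown"]
    fused = {h: 0.0 for h in hyps}
    conflict = 0.0
    for a in hyps:
        for b in hyps:
            w = m1[a] * m2[b]
            if a == b:
                fused[a] += w
            elif a == "unknown":
                fused[b] += w   # "unknown" is compatible with anything
            elif b == "unknown":
                fused[a] += w
            else:
                conflict += w   # lane vs. not_lane: contradictory evidence
    k = 1.0 - conflict          # normalization term of Dempster's rule
    return {h: v / k for h, v in fused.items()}

def mass(conf: float) -> dict:
    return {"lane": conf, "not_lane": 0.0, "unknown": 1.0 - conf}

# YOLO v3 (S×2S) is fairly sure, BGRU-L less so; fusion raises belief in "lane".
print(dempster_fuse(mass(0.8), mass(0.6)))  # {'lane': 0.92, 'not_lane': 0.0, 'unknown': 0.08}
```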
Result
KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is a commonly used traffic dataset that includes scenes such as sunny, cloudy, highway, and urban roads. To ensure coverage, scenes under complicated working conditions, such as rain, tunnel, and night, are added. In this study, scenes from the game Euro Truck Simulator 2 (ETS2) are used as a supplementary dataset. To evaluate the algorithm accurately, ETS2 is divided into two categories: conventional scenes ETS2_conv (sunny, cloudy) and complex scenes ETS2_complex (sunny, cloudy, night, rain, and tunnel). On the KITTI dataset, the detection accuracy of YOLO v3 (S×2S) improves as the detection grid density of YOLO v3 increases, reaching a mean average precision (mAP) of 88.39%. BGRU-L uses the spatial distribution relationship of the lane sequence to detect the lane position, with an mAP of 76.14%. The confidence-based D-S algorithm fuses the lane detection results of YOLO v3 (S×2S) and BGRU-L, raising the final mAP of lane detection to 90.28%. On the ETS2 dataset, the mAP values in the ETS2_conv and ETS2_complex scenarios are 92.49% and 91.73%, respectively, using the lane detection model that combines visual information and spatial distribution relationships.
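For reference, here is a simplified computation of the average precision behind the mAP figures above; the authors' exact matching and interpolation protocol is not given in the abstract, so this uninterpolated version is only indicative.

```python
# Simplified per-class average precision: area under the (uninterpolated)
# precision-recall curve built by ranking detections by confidence.
import numpy as np

def average_precision(scores, is_tp, n_gt):
    order = np.argsort(scores)[::-1]                 # rank by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_gt
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):              # sum area per recall step
        if r > prev_r:
            ap += p * (r - prev_r)
            prev_r = r
    return ap

# 4 detections, 3 ground-truth lanes; the two highest-scored are correct.
print(average_precision([0.9, 0.8, 0.6, 0.3], [1, 1, 0, 1], n_gt=3))  # ~0.917
```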
Conclusion
This study explores detection schemes based on machine vision and on the spatial distribution relationship of lanes to address the difficulty of accurately detecting lanes in complex scenes. On the basis of the uneven distribution density of lanes in the bird's-eye view, the grid density of the YOLO v3 model is increased, and the resulting YOLO v3 (S×2S) is suitable for detecting small-size, large-aspect-ratio targets. Experimental results show that YOLO v3 (S×2S) is significantly more accurate than YOLO v3 in lane detection. A lane detection model based on visual information alone has certain limitations and cannot meet high-precision detection requirements in complex scenes. However, the length, width, and direction of lanes are highly regular and exhibit serialization and structural correlation. BGRU-L, a lane prediction model based on the spatial distribution of lanes, is unaffected by the environment and generalizes well in rain, night, tunnel, and other scenarios. The confidence-based D-S algorithm fuses the detection results of YOLO v3 (S×2S) and BGRU-L, avoiding the large errors that a single lane detection model may produce and effectively reducing the uncertainty of the system. The resulting lane detection in complex scenes can meet the requirements of intelligent vehicles.
References
Banerjee I, Ling Y, Chen M C, Hasan S A, Langlotz C P, Moradzadeh N, Chapman B, Amrhein T, Mong D, Rubin D L, Farri O and Lungren M P. 2019. Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification. Artificial Intelligence in Medicine, 97: 79-88 [DOI:10.1016/j.artmed.2018.11.004]
Chen W W, Hu Z G, Wang H B, Wei Z Y and Xie Y H. 2018. Study on extension decision and artificial potential field based lane departure assistance system. Journal of Mechanical Engineering, 54(16): 134-143 [DOI:10.3901/JME.2018.16.134]
Cho K, van Merrienboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H and Bengio Y. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation [EB/OL]. [2020-06-30]. https://arxiv.org/pdf/1406.1078.pdf
Girshick R. 2015. Fast R-CNN//Proceedings of 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE: 1440-1448 [DOI:10.1109/ICCV.2015.169]
Guo L, Wang J Q and Li K Q. 2007. Lane detection based on points set optimization and disturbance set fuzzying. China Mechanical Engineering, 18(15): 1872-1876 [DOI:10.3321/j.issn:1004-132x.2007.15.028]
Gupta A and Choudhary A. 2018. A framework for camera-based real-time lane and road surface marking detection and recognition. IEEE Transactions on Intelligent Vehicles, 3(4): 476-485 [DOI:10.1109/TIV.2018.2873902]
He B, Ai R, Yan Y and Lang X P. 2016. Accurate and robust lane detection based on dual-view convolutional neural network//Proceedings of 2016 IEEE Intelligent Vehicles Symposium (IV). Gothenburg, Sweden: IEEE: 1041-1046 [DOI:10.1109/IVS.2016.7535517]
Hillel A B, Lerner R, Levi D and Raz G. 2014. Recent progress in road and lane detection: a survey. Machine Vision and Applications, 25(3): 727-745 [DOI:10.1007/s00138-011-0404-2]
Kim J and Lee M. 2014. Robust lane detection based on convolutional neural network and random sample consensus//Proceedings of the 21st International Conference on Neural Information Processing. Kuching, Malaysia: Springer: 454-461 [DOI:10.1007/978-3-319-12637-1_57]
Kingma D P and Ba J. 2014. Adam: a method for stochastic optimization [EB/OL]. [2020-06-30]. https://arxiv.org/pdf/1412.6980.pdf
Lee S, Kim J, Shin Yoon J, Shin S, Bailo O, Kim N, Lee T H, Hong H S, Han S H and So Kweon I. 2017. VPGNet: vanishing point guided network for lane and road marking detection and recognition//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 1947-1955 [DOI:10.1109/ICCV.2017.215]
Li J, Mei X, Prokhorov D and Tao D C. 2017. Deep neural network for structural prediction and lane detection in traffic scene. IEEE Transactions on Neural Networks and Learning Systems, 28(3): 690-703 [DOI:10.1109/TNNLS.2016.2522428]
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y and Berg A C. 2016. SSD: single shot MultiBox detector//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 21-37 [DOI:10.1007/978-3-319-46448-0_2]
Ma C, Yang C S, Yang F, Zhuang Y Q, Zhang Z W, Jia H Z and Xie X D. 2018. Trajectory factory: tracklet cleaving and re-connection by deep siamese Bi-GRU for multiple object tracking//Proceedings of 2018 IEEE International Conference on Multimedia and Expo (ICME). San Diego, USA: IEEE: 1-6 [DOI:10.1109/ICME.2018.8486454]
Manoharan K and Daniel P. 2019. Robust lane detection in hilly shadow roads using hybrid color feature//Proceedings of the 9th Annual Information Technology, Electromechanical Engineering and Microelectronics Conference (IEMECON). Jaipur, India: IEEE: 201-204 [DOI:10.1109/IEMECONX.2019.8877068]
Redmon J, Divvala S, Girshick R and Farhadi A. 2016. You only look once: unified, real-time object detection//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 779-788 [DOI:10.1109/CVPR.2016.91]
Redmon J and Farhadi A. 2017. YOLO9000: better, faster, stronger//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 7263-7271 [DOI:10.1109/CVPR.2017.690]
Redmon J and Farhadi A. 2018. YOLOv3: an incremental improvement [EB/OL]. [2020-06-30]. https://arxiv.org/pdf/1804.02767.pdf
Ren S Q, He K M, Girshick R and Sun J. 2017. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6): 1137-1149 [DOI:10.1109/TPAMI.2016.2577031]
Sivaraman S and Trivedi M M. 2013. Integrated lane and vehicle detection, localization, and tracking: a synergistic approach. IEEE Transactions on Intelligent Transportation Systems, 14(2): 906-917 [DOI:10.1109/TITS.2013.2246835]
Tang X L, Li S S, Wang H, Duan Z W, Li Y N and Zheng L. 2020. Research on energy control strategy based on hierarchical model predictive control in connected environment. Journal of Mechanical Engineering, 56(14): 119-128 [DOI:10.3901/JME.2020.14.119]
Tian Y, Gelernter J, Wang X, Chen W G, Gao J X, Zhang Y J and Li X L. 2018. Lane marking detection via deep convolutional neural network. Neurocomputing, 280: 46-55 [DOI:10.1016/j.neucom.2017.09.098]
Wang Z and He S X. 2004. An adaptive edge-detection method based on Canny algorithm. Journal of Image and Graphics, 9(8): 957-962 [DOI:10.3969/j.issn.1006-8961.2004.08.011]
Xiao J S, Cheng X, Li B J, Gao W and Peng H. 2015. Lane detection algorithm based on Beamlet transformation and K-means clustering. Journal of Sichuan University (Engineering Science Edition), 47(4): 98-103 [DOI:10.15961/j.jsuese.2015.04.014]
Ye Y Y, Hao X L and Chen H J. 2018. Lane detection method based on lane structural analysis and CNNs. IET Intelligent Transport Systems, 12(6): 513-520 [DOI:10.1049/iet-its.2017.0143]
Yoo H, Yang U and Sohn K. 2013. Gradient-enhancing conversion for illumination-robust lane detection. IEEE Transactions on Intelligent Transportation Systems, 14(3): 1083-1094 [DOI:10.1109/TITS.2013.2252427]
Zhang X, Yang W, Tang X L and He Z H. 2018a. Estimation of the lateral distance between vehicle and lanes using convolutional neural network and vehicle dynamics. Applied Sciences, 8(12): #2508 [DOI:10.3390/app8122508]
Zhang X, Yang W, Tang X L and Liu J. 2018b. A fast learning method for accurate and robust lane detection using two-stage feature extraction with YOLO v3. Sensors, 18(12): #4308 [DOI:10.3390/s18124308]
Zhang X, Yang W, Tang X L and Wang Y. 2018c. Lateral distance detection model based on convolutional neural network. IET Intelligent Transport Systems, 13(1): 31-39 [DOI:10.1049/iet-its.2017.0431]
Zhao Z, Chen W H, Wu X M, Chen P C Y and Liu J M. 2017. LSTM network: a deep learning approach for short-term traffic forecast. IET Intelligent Transport Systems, 11(2): 68-75 [DOI:10.1049/iet-its.2016.0208]
Zou Q, Jiang H W, Dai Q Y, Yue Y H, Chen L and Wang Q. 2020. Robust lane detection from continuous driving scenes using deep neural networks. IEEE Transactions on Vehicular Technology, 69(1): 41-54 [DOI:10.1109/TVT.2019.2949603]