
    WU Junyi, GU Xiaojing, GU Xingsheng. Night Road Scene Semantic Segmentation Based on Visible and Infrared Thermal Images[J]. Journal of East China University of Science and Technology, 2019, 45(2): 301-309. DOI: 10.14135/j.cnki.1006-3080.20180119001


    Night Road Scene Semantic Segmentation Based on Visible and Infrared Thermal Images

    • Abstract: To address the difficulty of parsing night road scenes, a method is proposed that combines visible and infrared thermal images for semantic segmentation of night scenes. First, the dual-band images are fed into two parallel fully convolutional networks, whose features are fused at the end of the network to produce a preliminary semantic segmentation result. On this basis, adaptive histogram equalization and bilateral filtering are applied to the dual-band images, and a dense conditional random field driven by the dual-band image information is used to refine the segmentation result. Experimental results show that, compared with using the visible image, the infrared thermal image, or the fusion image alone, the proposed method parses night road scenes more accurately.

       

      Abstract: Automatic driving and assisted driving techniques have been receiving more and more attention. Due to the limitations of visible-light sensors at night, it is difficult for vehicles to automatically observe road conditions and correctly understand the driving environment. To address these problems, this paper proposes a method for semantically segmenting night scenes by combining the information of visible and infrared thermal images. Firstly, the dual-band images are input into a parallel fully convolutional network, and semantic segmentation is realized effectively by combining the information of the two branches at the end of the network. Besides, we apply adaptive histogram equalization and bilateral filtering to the dual-band images and improve the final segmentation result with a dense conditional random field. To verify the proposed algorithm, we collected and annotated 541 pairs of visible and infrared thermal images of urban road scenes, as well as 200 pairs of visible and infrared thermal images of campus road scenes from the KAIST all-day visual place recognition database. Experimental results show that the semantic segmentation method based on visible images alone may be affected by headlights and has relatively low segmentation accuracy on persons and cars. The method based on thermal infrared images is good at segmenting heat-source objects, but it cannot recognize objects by their textures or colors. The method based on color-fusion images considers the characteristics of both visible and thermal infrared images and obtains some improvement. Furthermore, the parallel fully convolutional network resolves night road scenes more accurately than using only visible images, infrared thermal images, or fusion images, and the image enhancement processing further improves the refinement by the dense conditional random field on severely under-lit road scene images.
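The abstracts describe the network only at a high level, so the following is a minimal sketch, assuming a PyTorch implementation of a two-stream fully convolutional network with late fusion by feature concatenation; the class name TwoStreamFCN, the channel widths, the decoder, and the number of classes are hypothetical and not the authors' exact architecture.

```python
# Minimal sketch of a two-branch fully convolutional network with late fusion.
# All module names, channel widths and the toy decoder are illustrative
# assumptions; the paper's exact architecture is not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TwoStreamFCN(nn.Module):
    """Parallel encoders for the visible (3-channel) and thermal (1-channel)
    images; their features are concatenated at the end of the network and
    decoded into per-pixel class scores."""

    def __init__(self, num_classes):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))
        self.ir_branch = nn.Sequential(
            conv_block(1, 64), conv_block(64, 128), conv_block(128, 256))
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1))

    def forward(self, rgb, ir):
        f_rgb = self.rgb_branch(rgb)             # (N, 256, H/8, W/8)
        f_ir = self.ir_branch(ir)                # (N, 256, H/8, W/8)
        fused = torch.cat([f_rgb, f_ir], dim=1)  # late fusion by concatenation
        logits = self.classifier(fused)
        # Upsample back to the input resolution for dense prediction.
        return F.interpolate(logits, size=rgb.shape[-2:],
                             mode="bilinear", align_corners=False)


# Example: one 480x640 visible/thermal pair, a hypothetical 5-class label set.
model = TwoStreamFCN(num_classes=5)
rgb = torch.randn(1, 3, 480, 640)
ir = torch.randn(1, 1, 480, 640)
print(model(rgb, ir).shape)  # torch.Size([1, 5, 480, 640])
```

Concatenation is only one possible way to fuse the two branches at the end of the network; element-wise summation of the two feature maps would slot into the same structure.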
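The enhancement and dense-CRF refinement described in the abstracts can likewise be sketched with OpenCV and pydensecrf: CLAHE as an adaptive histogram equalization, bilateral filtering for edge-preserving smoothing, and a fully connected CRF whose appearance term is built from a stacked visible-plus-thermal guidance image. The 4-channel pairwise feature and all parameter values below are assumptions for illustration, not the paper's reported settings.

```python
# Sketch of the pre-processing and dense-CRF refinement step: CLAHE-style
# adaptive histogram equalization plus bilateral filtering on both bands,
# then a fully connected CRF conditioned on the enhanced dual-band image.
# Parameter values and the 4-channel guidance image are illustrative.
import cv2
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import (unary_from_softmax,
                              create_pairwise_gaussian,
                              create_pairwise_bilateral)


def enhance(bgr_u8, ir_u8):
    """Adaptive histogram equalization and bilateral filtering on both bands."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    # Visible image: equalize the lightness channel only.
    l, a, b = cv2.split(cv2.cvtColor(bgr_u8, cv2.COLOR_BGR2LAB))
    bgr_eq = cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)
    ir_eq = clahe.apply(ir_u8)
    # Edge-preserving smoothing to suppress noise before the CRF.
    return (cv2.bilateralFilter(bgr_eq, 9, 75, 75),
            cv2.bilateralFilter(ir_eq, 9, 75, 75))


def refine_with_dense_crf(softmax_probs, bgr_u8, ir_u8, n_iters=5):
    """Refine FCN class probabilities (n_classes, H, W) with a dense CRF whose
    pairwise term uses the enhanced visible and thermal images."""
    n_classes, h, w = softmax_probs.shape
    bgr_eq, ir_eq = enhance(bgr_u8, ir_u8)
    guide = np.dstack([bgr_eq, ir_eq])  # (H, W, 4) dual-band guidance image

    d = dcrf.DenseCRF(h * w, n_classes)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))
    # Smoothness term (pixel positions only).
    d.addPairwiseEnergy(create_pairwise_gaussian(sdims=(3, 3), shape=(h, w)),
                        compat=3)
    # Appearance term over all four channels of the dual-band image.
    d.addPairwiseEnergy(create_pairwise_bilateral(sdims=(60, 60),
                                                  schan=(13, 13, 13, 13),
                                                  img=guide, chdim=2),
                        compat=10)
    q = d.inference(n_iters)
    return np.argmax(np.array(q), axis=0).reshape(h, w)
```

The paper states only that the pairwise potentials use the dual-band image information; stacking the two enhanced bands into a single guidance image is simply one straightforward way to express that in pydensecrf.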

       

