Abstract:
Automatic driving and assisted driving techniques have been receiving increasing attention. Because visible-light sensors are limited at night, it is difficult for vehicles to automatically observe road conditions and correctly understand the driving environment. To address this problem, this paper proposes a method for semantic segmentation of night scenes that combines information from visible and infrared thermal images. First, the dual-band images are fed into a parallel fully convolutional network, and semantic segmentation is achieved by combining the information of the two branches at the end of the network. In addition, we apply adaptive histogram equalization and bilateral filtering to the dual-band images and refine the final segmentation result with a dense conditional random field. To verify the proposed algorithm, we collect and annotate 541 pairs of visible and infrared thermal images of urban road scenes, and 200 pairs of visible and infrared thermal images of campus road scenes from the KAIST all-day visual place recognition database. Experimental results show that semantic segmentation based on visible images alone can be disturbed by headlights and yields relatively low segmentation accuracy on persons and cars. The method based on infrared thermal images segments heat-emitting objects well but cannot recognize objects by texture or color. The method based on color-fused images exploits the characteristics of both visible and infrared thermal images and achieves some improvement. Furthermore, the parallel fully convolutional network resolves night road scenes more accurately than using only visible images, infrared thermal images, or fused images. Image enhancement preprocessing further improves the optimization of the dense conditional random field on severely under-lit road scene images.