
An Infrared Gas Imaging and Instance Segmentation Based Gas Leakage Detection Method

GU Xiaojing, LIN Haoqi, DING Dewu, GU Xingsheng

Citation: GU Xiaojing, LIN Haoqi, DING Dewu, GU Xingsheng. An Infrared Gas Imaging and Instance Segmentation Based Gas Leakage Detection Method[J]. Journal of East China University of Science and Technology (Natural Science Edition). doi: 10.14135/j.cnki.1006-3080.20210719001


doi: 10.14135/j.cnki.1006-3080.20210719001
Funding: National Natural Science Foundation of China (61973122)
Author biography:

    GU Xiaojing (b. 1983), female, Ph.D., associate professor. Her main research interests are multimodal machine vision, machine learning, and climate change response. E-mail: xjing.gu@ecust.edu.cn

  • CLC number: TP391


  • Abstract: To automate leak detection with infrared gas imaging, a deep-network-based instance segmentation method for gas plumes is proposed that simultaneously achieves leak detection, plume segmentation, and the separation of multiple leak sources. Unlike existing instance segmentation methods, and to account for the anisotropic spatial characteristics of leaking gas plumes, the probability function of a two-dimensional Gaussian model is adopted as the similarity measure in the embedding space, and a new clustering loss function is proposed. This loss pulls together the pixels within each instance while learning an oriented elliptical bandwidth that maximizes the segmentation mask of every plume instance. To obtain more infrared gas imaging data and to avoid the difficulty of manually annotating plume contours, a method for training the model on a synthetic infrared gas imaging dataset is proposed. Experimental results show that, after training on synthetic data, the proposed method successfully performs automatic leak detection on real infrared videos. Compared with other state-of-the-art instance segmentation methods, it maintains high accuracy at a faster processing speed and is therefore suitable for real-time leak detection scenarios.
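
    The sketch below is only an illustration of the embedding-space idea described in the abstract, not the authors' implementation. It assumes each pixel is mapped to a 2-D embedding vector, each plume instance $k$ has a center $C_k$ and a learned 2×2 covariance $\Sigma_k$ (the oriented elliptical bandwidth), and membership is scored by the two-dimensional Gaussian $\phi_k(e)=\exp\left(-\tfrac{1}{2}(e-C_k)^{\top}\Sigma_k^{-1}(e-C_k)\right)$. The function and variable names are hypothetical.

```python
import numpy as np

def plume_membership(embeddings, center, cov):
    """Illustrative 2-D Gaussian membership score for one plume instance.

    embeddings : (N, 2) per-pixel embedding vectors
    center     : (2,)   instance center C_k in the embedding space
    cov        : (2, 2) learned covariance Sigma_k (oriented elliptical bandwidth)

    Returns an (N,) array in (0, 1]; thresholding (e.g. at 0.5) gives the mask.
    """
    diff = embeddings - center                                    # (N, 2)
    maha = np.einsum("ni,ij,nj->n", diff, np.linalg.inv(cov), diff)
    return np.exp(-0.5 * maha)                                    # 2-D Gaussian score

# Toy usage: an anisotropic bandwidth elongated along a 45-degree axis,
# mimicking the elongated, drifting shape of a gas plume.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 2))
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[c, -s], [s, c]])
cov = R @ np.diag([4.0, 0.25]) @ R.T
mask = plume_membership(emb, center=np.zeros(2), cov=cov) > 0.5
```

    Because the covariance can stretch and rotate, the resulting decision region is an oriented ellipse rather than a circle, which is the property the abstract attributes to the proposed clustering loss (cf. the 1-D vs. 2-D Gaussian comparison in Table 4).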

     

  • Figure 1.  The overall framework of the proposed method

    Figure 2.  Pixel distribution in the embedding space

    Figure 3.  Synthesized gas leakage dataset

    Figure 4.  Performance comparison of various methods

    Figure 5.  Instance segmentation results on the synthetic dataset

    Figure 6.  Instance segmentation results on real infrared video 1 (scene with gas leakage)

    Figure 7.  Instance segmentation results on real infrared video 2 (scene with gas leakage)

    Figure 8.  Instance segmentation results on real infrared video 3 (scene without gas leakage)

    Figure 9.  Instance segmentation results on real infrared video 4 (scene without gas leakage)

    Table 1.  Performance comparison of instance segmentation methods

    Method                AP      AP50    AP75    FPS
    Mask R-CNN            0.757   0.969   0.851   2.2
    MS R-CNN              0.768   0.969   0.857   <2
    Discriminative Loss   0.224   0.490   0.185   5
    Harmonic              0.441   0.700   0.473   <1
    Spatial Embeddings    0.673   0.923   0.763   11
    This paper            0.742   0.959   0.842   11

    Table 2.  Accuracy (Acc) on real videos with gas leakage

    Method                Video 1 (240 frames)   Video 2 (45 frames)   Mean Acc
    This paper            0.93                   0.96                  0.945
    Discriminative Loss   0.56                   0.21                  0.385
    Spatial Embedding     0.83                   0                     0.415
    Harmonic              0.85                   0.21                  0.53
    Mask R-CNN            0.87                   0.75                  0.81
    MS R-CNN              0.92                   0.77                  0.845

    Table 3.  False alarm rate (FA) on real videos without gas leakage

    Method                Video 3 (192 frames)   Video 4 (217 frames)   Average FA
    This paper            0                      0                      0
    Discriminative Loss   0.56                   0.22                   0.39
    Spatial Embedding     0.66                   0.20                   0.43
    Harmonic              0.88                   0.78                   0.83
    Mask R-CNN            0                      0.17                   0.085
    MS R-CNN              0                      0.13                   0.065

    Table 4.  Instance segmentation performance with different distributions for modeling the embedding space

    Model                                           AP      AP50    AP75
    One-dimensional Gaussian (same variance)        0.673   0.923   0.763
    One-dimensional Gaussian (different variance)   0.622   0.801   0.702
    Two-dimensional Gaussian                        0.742   0.959   0.842

    Table 5.  Comparison of fixed and learnable $ {\boldsymbol{\sigma}}_k $

    Model                                  AP      AP50    AP75
    Learnable $ {\boldsymbol{\sigma}}_k $  0.742   0.959   0.842
    Fixed $ {\boldsymbol{\sigma}}_k $      0.726   0.958   0.826

    Table 6.  Comparison of fixed and learnable center

    Model              AP      AP50    AP75
    Learnable center   0.742   0.959   0.842
    Fixed center       0.713   0.946   0.804
  • [1] LIANG X, CHEN X, ZHANG J, et al. Reactivity-based industrial volatile organic compounds emission inventory and its implications for ozone control strategies in China[J]. Atmospheric Environment, 2017, 162: 115-126. doi: 10.1016/j.atmosenv.2017.04.036
    [2] ALVAREZ R A, ZAVALA-ARAIZA D, LYON D R, et al. Assessment of methane emissions from the US oil and gas supply chain[J]. Science, 2018, 361(6398): 186-194.
    [3] MOREIRA SCAFUTTO R D P, DE SOUZA FILHO C R. Detection of heavy hydrocarbon plumes (ethane, propane and butane) using airborne longwave (7.6-13.5 μm) infrared hyperspectral data[J]. Fuel, 2019, 242: 863-870. doi: 10.1016/j.fuel.2018.12.127
    [4] RAVIKUMAR A P, WANG J, MCGUIRE M, et al. "Good versus good enough"? empirical tests of methane leak detection sensitivity of a commercial infrared camera[J]. Environmental Science & Technology, 2018, 52(4): 2368-2374.
    [5] STRAHL T, HERBST J, LAMBRECHT A, et al. Methane leak detection by tunable laser spectroscopy and mid-infrared imaging[J]. Applied Optics, 2021, 60(15): C68-C75. doi: 10.1364/AO.419942
    [6] DING D, SHENTU L, ZOU B, et al. Application of infrared thermal imaging technology in the detection of hidden leakage hazards in petrochemical plants[J]. Safety, Health & Environment, 2015, 15(12): 17-20. doi: 10.3969/j.issn.1672-7932.2015.12.005 (in Chinese)
    [7] WANG J, TCHAPMI L P, RAVIKUMAR A P, et al. Machine vision for natural gas methane emissions detection using an infrared camera[J]. Applied Energy, 2020, 257: 113998. doi: 10.1016/j.apenergy.2019.113998
    [8] BADAWI D, PAN H, CETIN S C, et al. Computationally efficient spatio-temporal dynamic texture recognition for volatile organic compound (VOC) leakage detection in industrial plants[J]. IEEE Journal of Selected Topics in Signal Processing, 2020, 14(4): 676-687. doi: 10.1109/JSTSP.2020.2976555
    [9] SHI J, CHANG Y, XU C, et al. Real-time leak detection using an infrared camera and Faster R-CNN technique[J]. Computers & Chemical Engineering, 2020, 135: 106780.
    [10] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. doi: 10.1109/TPAMI.2016.2577031
    [11] HE K, GKIOXARI G, DOLLAR P, et al. Mask R-CNN [C]//IEEE International Conference on Computer Vision (ICCV). USA: IEEE, 2017: 2980-2988.
    [12] CHEN H, SUN K, TIAN Z, et al. BlendMask: Top-down meets bottom-up for instance segmentation [C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). USA: IEEE, 2020: 8570-8578.
    [13] TIAN Z, SHEN C, CHEN H. Conditional convolutions for instance segmentation [C]//European Conference on Computer Vision (ECCV). USA: Springer, 2020: 282-298.
    [14] HUANG Z, HUANG L, GONG Y, et al. Mask scoring R-CNN [C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). USA: IEEE, 2019: 6402-6411.
    [15] DE BRABANDERE B, NEVEN D, VAN GOOL L. Semantic instance segmentation for autonomous driving [C]//IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). USA: IEEE, 2017: 478-480.
    [16] KULIKOV V, LEMPITSKY V. Instance segmentation of biological images using harmonic embeddings [C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). USA: IEEE, 2020: 3842-3850.
    [17] NEVEN D, DE BRABANDERE B, PROESMANS M, et al. Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth [C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). USA: IEEE, 2019: 8829-8837.
    [18] REZATOFIGHI H, TSOI N, GWAK J, et al. Generalized intersection over union: A metric and a loss for bounding box regression [C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). USA: IEEE, 2019: 658-666.
    [19] JEONG J, CHO Y, SHIN Y-S, et al. Complex urban dataset with multi-level sensors from highly diverse urban environments[J]. International Journal of Robotics Research, 2019, 38(6): 642-657. doi: 10.1177/0278364919843996
    [20] GONG S, BOURENNANE E B, YANG X. MFNet: Multi-feature convolutional neural network for high-density crowd counting [C]//2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). Canada: IEEE, 2020: 384-390.
    [21] AKHLOUFI M A, PORCHER C, BENDADA A. Fusion of thermal infrared and visible spectrum for robust pedestrian tracking [C]//Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications XI. USA: International Society for Optics and Photonics, 2014: 907601-907608.
    [22] SHIVAKUMAR S S, RODRIGUES N, ZHOU A, et al. PST900: RGB-thermal calibration, dataset and segmentation network [C]//IEEE International Conference on Robotics and Automation (ICRA). Paris, France : IEEE, 2020: 9441-9447.
    [23] ROMERA E, ALVAREZ J M, BERGASA L M, et al. ERFNet: Efficient residual factorized ConvNet for real-time semantic segmentation[J]. IEEE Transactions on Intelligent Transportation Systems, 2018, 19(1): 263-272. doi: 10.1109/TITS.2017.2750080
    [24] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The pascal visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338. doi: 10.1007/s11263-009-0275-4
Publication history
  • Received: 2021-07-19
  • Accepted: 2021-12-21
  • Published online: 2022-04-12
