
  • ISSN 1006-3080
  • CN 31-1691/TQ

Fume Hood Window State Recognition Method Based on Few Shot Deep Learning

MA Zhenwei, HE Gaoqi, YUAN Yubo

Citation: MA Zhenwei, HE Gaoqi, YUAN Yubo. Fume Hood Window State Recognition Method Based on Few Shot Deep Learning[J]. Journal of East China University of Science and Technology, 2020, 46(3): 428-435. doi: 10.14135/j.cnki.1006-3080.20190412004


doi: 10.14135/j.cnki.1006-3080.20190412004
Funding: Open Project of the State Key Laboratory of CAD&CG, Zhejiang University (A1913); Natural Science Foundation of Shanghai (19ZR1415800); Shanghai Science Popularization Education Development Foundation
Details
    Author biography:

    MA Zhenwei (1995-), male, born in Shanghai, master's student; main research interest: computer vision. E-mail: 960901625@qq.com

    Corresponding author:

    YUAN Yubo, E-mail: ybyuan@ecust.edu.cn

  • CLC number: TP391


  • Abstract: When laboratory personnel leave a chemistry laboratory, fume hood sashes (windows) that are not closed in time cause serious safety hazards and waste energy, and effective information-based management means are currently lacking. Exploiting the non-contact nature and strong scalability of computer vision technology, this paper proposes a fume hood window state recognition method based on few-shot deep learning. First, the surveillance video is preprocessed and the fume hood window region is extracted based on motion features and geometric priors; then an improved multi-scale dilated prototypical network is trained to accurately recognize the state of the fume hood window. In practical application, the method is combined with an improved person detection algorithm to effectively reduce the number of recognition runs. Experiments verify that the accuracy of the method is 10.95% higher than that of a convolutional neural network and that it is highly robust to illumination changes, so it effectively meets the daily safety management requirements of chemistry laboratories.
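    The recognition model described above is a prototypical network (DProtoNet, Fig. 2) trained on few labelled samples. As a rough illustration of the prototypical-network classification step only, the following Python sketch computes one prototype per class from a small support set of window crops and classifies query crops by nearest prototype; the encoder, tensor shapes and function names are illustrative assumptions, not the paper's released code.

    import torch
    import torch.nn.functional as F

    def prototype_classify(encoder, support_x, support_y, query_x, n_classes):
        # support_x: [N, C, H, W] labelled window crops, support_y: [N] class ids,
        # query_x: [M, C, H, W] crops to classify (e.g. sash open / closed).
        z_support = encoder(support_x)            # [N, D] support embeddings
        z_query = encoder(query_x)                # [M, D] query embeddings
        # Class prototype = mean embedding of that class's support samples.
        prototypes = torch.stack([z_support[support_y == c].mean(dim=0)
                                  for c in range(n_classes)])      # [K, D]
        # Score queries by negative squared Euclidean distance to each prototype.
        dists = torch.cdist(z_query, prototypes) ** 2               # [M, K]
        return F.softmax(-dists, dim=1)           # per-class probabilities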

     

  • Figure 1. System architecture of fume hood safety management platform

    Figure 2. Architecture of the multi-scale dilated prototypical network (DProtoNet)

    Figure 3. Demonstration of sample prediction results

    Figure 4. Illumination transformation with different exponential factors
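    Figure 4 and Table 2 concern robustness to illumination changes generated with different exponential factors. The paper's exact transform is not reproduced here; a plausible gamma-style sketch, assuming 8-bit images and illustrative factor values (the function name and factors are assumptions), is:

    import numpy as np

    def illumination_transform(img, gamma):
        # Exponential (gamma-like) intensity transform for an 8-bit image:
        # gamma < 1 brightens, gamma > 1 darkens; factor values are illustrative.
        normalized = img.astype(np.float32) / 255.0
        return np.clip(normalized ** gamma * 255.0, 0, 255).astype(np.uint8)

    # e.g. darker/brighter variants of a window crop for the robustness test
    # variants = [illumination_transform(crop, g) for g in (0.5, 0.8, 1.2, 1.5)]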

    Table 1. Accuracy of different methods

    Algorithm    SVM accuracy/%    Random forest accuracy/%
    LBP          51.30             69.90
    PCA          57.10             64.76
    ColorHist    75.94             47.56
    HOG          57.10             82.12
    CNN          88.34
    ProtoNet     97.32
    DProtoNet    99.29

    Table 2. Accuracy under illumination changes

    Algorithm    SVM accuracy/%    Random forest accuracy/%
    LBP          50.94             60.69
    PCA          56.29             50.77
    ColorHist    50.21             52.32
    HOG          60.19             72.56
    CNN          77.25
    ProtoNet     94.43
    DProtoNet    95.74

    Table 3. Accuracy under different dilation rate combinations

    Dilation rates    Accuracy/%
    1, 2              98.25
    1, 2, 3           99.29
    1, 2, 3, 4        98.90
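    Table 3 indicates that combining dilated convolutions with rates 1, 2 and 3 gives the best accuracy. A minimal PyTorch sketch of such a multi-scale dilated block, assuming parallel 3x3 branches fused by a 1x1 convolution (channel sizes and the fusion scheme are assumptions, not values taken from the paper):

    import torch
    import torch.nn as nn

    class MultiScaleDilatedBlock(nn.Module):
        # Parallel 3x3 convolutions with dilation rates 1, 2 and 3; padding equals
        # the dilation rate so all branches keep the same spatial size and can be
        # concatenated, then fused by a 1x1 convolution.
        def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 3)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                for r in rates
            ])
            self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            feats = [torch.relu(b(x)) for b in self.branches]
            return torch.relu(self.fuse(torch.cat(feats, dim=1)))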
Publication history
  • Received: 2019-04-12
  • Published online: 2019-10-15
  • Issue date: 2020-06-01
