  • ISSN 1006-3080
  • CN 31-1691/TQ

Multi-feature Non-contact Deception Detection Technology

WEI Jiangping, LIN Jiajun, CHEN Ning

Citation: WEI Jiangping, LIN Jiajun, CHEN Ning. Multi-feature Non-contact Deception Detection Technology[J]. Journal of East China University of Science and Technology, 2020, 46(4): 556-563. doi: 10.14135/j.cnki.1006-3080.20190619002


doi: 10.14135/j.cnki.1006-3080.20190619002
Funding: National Natural Science Foundation of China (61771196)
Details
    About the author:

    WEI Jiangping (1995-), female, from Sichuan, master's student; her main research interests are machine learning and deception detection. E-mail: 18701729045@163.com

    Corresponding author:

    LIN Jiajun, E-mail: jjlin@ecust.edu.cn

  • CLC number: TP391

Multi-feature Non-contact Deception Detection Technology

  • Abstract: To improve the accuracy of deception detection, a deception detection model based on multi-modal information fusion is proposed. With this model, the deception assessment task can be completed effectively by processing only the video and audio signals recorded while the subject is speaking. Since heart rate reflects the subject's emotional changes, heart-rate variation features are extracted by photoplethysmography (PPG) combined with a fully connected network; video and semantic features are extracted by a 3D convolutional neural network (3D-CNN) and by Word2Vec+CNN, respectively, and the features are then fused. A linear support vector machine (L-SVM) classifies the fused features. Simulation experiments on the open-source Real-life Trial dataset show that, compared with other multi-modal models, the proposed model improves the three-modality accuracy by 2.74%.
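    A minimal sketch of the fusion-and-classification stage summarised above, assuming the heart-rate (PPG), video (3D-CNN) and text (Word2Vec+CNN) branches have already produced fixed-length feature vectors per clip. The feature dimensions, the fuse_features helper, the placeholder random data and the scikit-learn LinearSVC pipeline are illustrative assumptions, not the authors' implementation.

        # Illustrative sketch: late fusion of per-modality features + linear SVM.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        def fuse_features(heart_rate_feat, video_feat, text_feat):
            # Concatenate the three modality vectors into one fused vector.
            return np.concatenate([heart_rate_feat, video_feat, text_feat], axis=-1)

        rng = np.random.default_rng(0)
        n_clips = 121  # roughly the size of the Real-life Trial corpus
        # Random placeholders standing in for PPG, 3D-CNN and Word2Vec+CNN outputs.
        X = np.stack([fuse_features(rng.normal(size=16),
                                    rng.normal(size=128),
                                    rng.normal(size=64)) for _ in range(n_clips)])
        y = rng.integers(0, 2, size=n_clips)  # 1 = deceptive, 0 = truthful (placeholders)

        # Linear SVM (L-SVM) on the standardised fused representation.
        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))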

     

  • Figure 1.  Network structure of 3D-CNN

    Figure 2.  Network structure of Word2Vec+CNN

    Figure 3.  Framework of the multi-modal model

    Figure 4.  ROC curves for different combinations of modal features

    Table 1.  Comparison of objective evaluation indicators of different classification models on the test set

    Model           Accuracy/%
    CNN             84.80
    L-SVM(H+C)      98.88

    Table 2.  Comparison of objective evaluation indicators of leave-one-out cross-validation

    Model                                     Accuracy/%
    DT[7]                                     75.2
    Heart rate + Video (L-SVM(C))             81.32
    Heart rate + Video (L-SVM(H+C))           82.23
    Heart rate + Text (L-SVM(C))              96.11
    Heart rate + Text (L-SVM(H+C))            97.19
    Heart rate + Text + Video (L-SVM(H+C))    98.51

    Table 3.  Comparison of objective evaluation indicators of the random extraction method

    Model                                     Accuracy/%
    DEV[24]                                   84.16
    Heart rate + Video (L-SVM(C))             78.10
    Heart rate + Video (L-SVM(H+C))           83.70
    Heart rate + Text (L-SVM(C))              96.80
    Heart rate + Text (L-SVM(H+C))            97.70
    Heart rate + Text + Video (L-SVM(H+C))    98.30

    Table 4.  Comparison of objective evaluation indicators of ten-fold cross-validation

    Model                                     Accuracy/%    AUC
    LR[19]                                    -             0.9221
    MLP (Static)[23]                          90.99         0.9348
    MLP (Non-static)[23]                      96.14         0.9799
    Heart rate + Video (L-SVM(C))             71.22         0.7160
    Heart rate + Video (L-SVM(H+C))           73.46         0.7300
    Heart rate + Text (L-SVM(C))              95.57         0.9556
    Heart rate + Text (L-SVM(H+C))            96.89         0.9683
    Heart rate + Text + Video (L-SVM(H+C))    98.88         0.9883
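    As a rough illustration of the ten-fold protocol behind Table 4, the sketch below runs stratified 10-fold cross-validation for a linear SVM on pre-fused feature vectors and averages accuracy and AUC over the folds. The evaluate_10fold helper and the scikit-learn calls are assumptions for illustration, not the paper's evaluation code; X and y stand for fused feature vectors and deception labels as in the earlier sketch.

        # Illustrative sketch: stratified 10-fold CV reporting mean accuracy and AUC.
        import numpy as np
        from sklearn.model_selection import StratifiedKFold
        from sklearn.metrics import accuracy_score, roc_auc_score
        from sklearn.svm import LinearSVC

        def evaluate_10fold(X, y, seed=0):
            skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
            accs, aucs = [], []
            for train_idx, test_idx in skf.split(X, y):
                clf = LinearSVC(C=1.0, max_iter=10000)
                clf.fit(X[train_idx], y[train_idx])
                pred = clf.predict(X[test_idx])             # hard labels for accuracy
                score = clf.decision_function(X[test_idx])  # signed margins for AUC
                accs.append(accuracy_score(y[test_idx], pred))
                aucs.append(roc_auc_score(y[test_idx], score))
            return float(np.mean(accs)), float(np.mean(aucs))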
  • [1] FITZPATRICK E, BACHENKO J, FORNACIARI T. Automatic Detection of Verbal Deception[M]. USA: Morgan & Claypool Publisher, 2015.
    [2] PENNEBAKER J W, FRANCIS M E, BOOTH R J. Linguistic inquiry and word count: LIWC 2001[EB/OL]. scholar.google.cn, 2001-03-28 [2019-09-04]. http://www.depts.ttu.edu/psy/lusi/files/LIWCmanual.pdf.
    [3] NEWMAN M L, PENNEBAKER J W, BERRY D S, et al. Lying words: Predicting deception from linguistic styles[J]. Personality and Social Psychology Bulletin, 2003, 29(5): 665-675. doi: 10.1177/0146167203029005010
    [4] ZHOU L, BURGOON J K, NUNAMAKER J F, et al. Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communications[J]. Group Decision and Negotiation, 2004, 13(1): 81-106. doi: 10.1023/B:GRUP.0000011944.62889.6f
    [5] MIHALCEA R, PULMAN S. Linguistic ethnography: Identifying dominant word classes in text[C]//International Conference on Intelligent Text Processing and Computational Linguistics. Berlin, Heidelberg: Springer, 2009: 594-602.
    [6] YANCHEVA M, RUDZICZ F. Automatic detection of deception in child-produced speech using syntactic complexity features[C]//Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Bulgaria: Association for Computational Linguistics, 2013: 944-953.
    [7] PÉREZ-ROSAS V, ABOUELENIEN M, MIHALCEA R, et al. Deception detection using real-life trial data[C]//Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. USA: ACM, 2015: 59-66.
    [8] ABRAMS S. The Complete Polygraph Handbook[M]. England: Lexington Books/DC Heath and Com, 1989.
    [9] ABOUELENIEN M, PÉREZ-ROSAS V, MIHALCEA R, et al. Deception detection using a multimodal approach[C]//Proceedings of the 16th International Conference on Multimodal Interaction. USA: ACM, 2014: 58-65.
    [10] PAVLIDIS I, EBERHARDT N L, LEVINE J A. Human behaviour: Seeing through the face of deception[J]. Nature, 2002, 415(6867): 35.
    [11] POLLINA D A, DOLLINS A B, SENTER S M, et al. Facial skin surface temperature changes during a “concealed information” test[J]. Annals of Biomedical Engineering, 2006, 34(7): 1182-1189. doi: 10.1007/s10439-006-9143-3
    [12] SIMPSON J R. Functional MRI lie detection: Too good to be true?[J]. The Journal of the American Academy of Psychiatry and the Law, 2008, 36(4): 491-498.
    [13] APPLE W, STREETER L A, KRAUSS R M. Effects of pitch and speech rate on personal attributions[J]. Journal of Personality and Social Psychology, 1979, 37(5): 715-727. doi: 10.1037/0022-3514.37.5.715
    [14] GRACIARENA M, SHRIBERG E, STOLCKE A, et al. Combining prosodic lexical and cepstral systems for deceptive speech detection[C]//2006 IEEE International Conference on Acoustics Speech and Signal Processing. France: IEEE, 2006: 1033-1036.
    [15] BENUS S, ENOS F, HIRSCHBERG J, et al. Pauses in deceptive speech[EB/OL]. scholar.google.cn, 2013-06-28 [2019-09-04]. https://academiccommons.columbia.edu/doi/10.7916/D8SQ97TG.
    [16] KIRCHHUEBEL C. The acoustic and temporal characteristics of deceptive speech[D]. UK: University of York, 2013.
    [17] DEPAULO B M, LINDSAY J J, MALONE B E, et al. Cues to deception[J]. Psychological Bulletin, 2003, 129(1): 74-118. doi: 10.1037/0033-2909.129.1.74
    [18] EKMAN P. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage[M]. USA: WW Norton & Company, 2009.
    [19] WU Z, SINGH B, DAVIS L S, et al. Deception detection in videos[EB/OL]. scholar.google.cn, 2018-04-25 [2019-09-04]. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16926.
    [20] CASO L, MARICCHIOLO F, BONAIUTO M, et al. The impact of deception and suspicion on different hand movements[J]. Journal of Nonverbal Behavior, 2006, 30(1): 1-19. doi: 10.1007/s10919-005-0001-z
    [21] COHEN D, BEATTIE G, SHOVELTON H. Nonverbal indicators of deception: How iconic gestures reveal thoughts that cannot be suppressed[J]. Semiotica, 2010, 182: 133-174.
    [22] LEVITAN S I, AN G, MA M, et al. Combining acoustic-prosodic, lexical, and phonotactic features for automatic deception detection[EB/OL]. scholar.google.cn, 2016-09-08 [2019-09-04]. http://dx.doi.org/10.21437/Interspeech.2016-1519.
    [23] KRISHNAMURTHY G, MAJUMDER N, PORIA S, et al. A deep learning approach for multimodal deception detection[EB/OL].arxiv.org, 2018-03-01[2019-06-10]. https://arxiv.org/abs/1803.00344.
    [24] KARIMI H, TANG J, LI Y. Toward end-to-end deception detection in videos[C]//2018 IEEE International Conference on Big Data (Big Data). USA: IEEE, 2018: 1278-1283.
    [25] POH M Z, MCDUFF D J, PICARD R W. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation[J]. Optics Express, 2010, 18(10): 10762-10774. doi: 10.1364/OE.18.010762
    [26] VERKRUYSSE W, SVAASAND L O, NELSON J S. Remote plethysmographic imaging using ambient light[J]. Optics Express, 2008, 16(26): 21434-21445. doi: 10.1364/OE.16.021434
    [27] ABOUELENIEN M, MIHALCEA R, BURZO M. Analyzing thermal and visual clues of deception for a non-contact deception detection approach[C]//Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments. Greece: ACM, 2016: 1-4.
    [28] JI S, XU W, YANG M, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 221-231. doi: 10.1109/TPAMI.2012.59
Publication history
  • Received:  2019-06-19
  • Published online:  2019-10-11
  • Issue date:  2020-08-01
