
    Citation: WEI Jiangping, LIN Jiajun, CHEN Ning. Multi-feature Non-contact Deception Detection Technology[J]. Journal of East China University of Science and Technology, 2020, 46(4): 556-563. DOI: 10.14135/j.cnki.1006-3080.20190619002

    Multi-feature Non-contact Deception Detection Technology

    • Abstract: To improve the accuracy of deception detection, a deception detection model based on multi-modal information fusion is proposed. With this model, the deception detection task can be completed effectively by processing only the video and audio signals recorded while the subject is speaking. Heart rate can reflect the subject's emotional changes, so heart-rate variation features are extracted via photo-plethysmography (PPG) and a fully connected network; video and semantic features are extracted via 3D convolutional neural networks (3D-CNN) and Word2Vec+CNN, and the features are fused. A linear support vector machine (L-SVM) is then used to classify the fused features. Simulation results on the open-source Real-life Trial dataset show that, compared with other multi-modal models, the proposed deception detection model improves the three-modality accuracy by 2.74%.
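
    As an illustration of the PPG step mentioned above, the Python sketch below estimates a heart-rate signal from a face video. Everything in it (the function names extract_ppg_signal and heart_rate_features, the Haar-cascade face detector, the green-channel mean as the raw PPG signal, and the 0.7 to 4 Hz pass band) is an assumption made for this example, not the paper's actual preprocessing, which the abstract does not specify.

```python
# Minimal sketch (assumptions throughout): estimate a PPG-style heart-rate
# signal from a talking-head video by averaging the green channel over the
# detected face region, band-pass filtering, and reading the dominant frequency.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt


def extract_ppg_signal(video_path):
    """Return the per-frame mean green value over the face region and the frame rate."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue  # skip frames where no face is found
        x, y, w, h = faces[0]
        values.append(frame[y:y + h, x:x + w, 1].mean())  # green channel
    cap.release()
    return np.asarray(values), fps


def heart_rate_features(values, fps):
    """Band-pass to the plausible heart-rate band (42-240 bpm) and pick the spectral peak."""
    centered = values - values.mean()
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, centered)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    bpm = 60.0 * freqs[np.argmax(spectrum)]
    return filtered, bpm  # the filtered signal could feed the fully connected network
```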


      Abstract: Non-contact deception detection technology has significant applications in areas such as judicial practice. To improve the accuracy of deception detection, this paper proposes a multi-modal fusion based deception detection model, with which the deception detection task can be completed effectively by processing only the video and audio signals recorded while the subject is speaking. Heart rate can reflect the emotional changes of the speaker. We extract heart-rate features via the photo-plethysmography (PPG) method and a fully connected network, and extract video and text features through 3D convolutional neural networks (3D-CNN) and Word2Vec+CNN. The extracted features are fused, and a linear support vector machine (L-SVM) is then used to classify the fused features. Simulation experiments are carried out on the open-source Real-life Trial dataset. Compared with the latest MLPH+C multi-modal model, the proposed deception detection model increases the accuracy by 2.74% to 23% in the three-modality setting. To evaluate whether combining features from different modalities improves the performance of the deception detection model, we conduct experiments on the feature combinations of the individual modalities. The accuracy of every combination exceeds 70%, and the combination of text and heart-rate features reaches 96.89%. In particular, the combination of text, video, and heart-rate features obtains the highest accuracy, 98.88%, and an AUC value of 0.9883. Three-modality prediction outperforms single-modality and dual-modality prediction. The experimental results show that the proposed multi-modal model can effectively improve the accuracy of deception detection.
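
    To make the fusion and classification stage concrete, the following Python sketch concatenates per-sample feature vectors from the three modalities and trains a linear SVM on them, roughly mirroring the pipeline described above. The arrays are random placeholders standing in for the 3D-CNN, Word2Vec+CNN, and heart-rate features (so the printed accuracy is chance level), and the dimensions and scikit-learn settings are assumptions, not the authors' implementation.

```python
# Minimal sketch (placeholder data): feature-level fusion by concatenation,
# followed by a linear SVM, mirroring the pipeline described in the abstract.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 120                                    # placeholder sample count
video_feat = rng.normal(size=(n, 128))     # stand-in for 3D-CNN embeddings
text_feat = rng.normal(size=(n, 100))      # stand-in for Word2Vec+CNN embeddings
hr_feat = rng.normal(size=(n, 16))         # stand-in for heart-rate (PPG + FC) features
labels = rng.integers(0, 2, size=n)        # 1 = deceptive, 0 = truthful

# Feature-level fusion: concatenate the per-sample modality vectors.
fused = np.concatenate([video_feat, text_feat, hr_feat], axis=1)

# Linear SVM on standardized fused features; random placeholders give chance-level scores.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy: %.3f" % scores.mean())
```

    Concatenation is the simplest form of feature-level fusion; swapping the real extractor outputs in for the placeholder arrays would be enough to reuse the classification stage.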


