
    Citation: XIAO Jialin, LI Yu, YUAN Qinglong, TANG Zhiqi. Dropout Regularization Method of Convolutional Neural Network Based on Constant False Alarm Rate[J]. Journal of East China University of Science and Technology (Natural Science Edition), 2022, 48(1): 87-98. DOI: 10.14135/j.cnki.1006-3080.20201127005

    Dropout Regularization Method of Convolutional Neural Network Based on Constant False Alarm Rate


      Abstract: Compared with traditional machine learning algorithms, deploying deep neural networks in embedded systems can significantly improve the object recognition performance of robot systems. However, this performance is limited by the computing resources and memory capacity of the embedded platform, so it is necessary to simplify the network structure and improve system efficiency through model pruning and parameter quantization, while also preventing overfitting through dropout regularization to improve recognition accuracy. To further improve the object recognition performance of deep neural network algorithms in embedded robot systems, this paper proposes a deep neural network dropout regularization method based on constant false alarm rate detection (CFAR-Dropout). First, by quantizing the weights, both weights and activations are reduced from floating-point numbers to binary values. Second, a constant false alarm rate (CFAR) detector is designed that maintains a fixed false alarm rate and adaptively prunes neuron nodes, optimizing the set of neurons involved in the computation. Finally, on the embedded platform PYNQ-Z2, a VGG16-based optimized model is used to experimentally verify the object recognition performance of the proposed algorithm. Experimental results show that, compared with the classic dropout regularization method, CFAR-Dropout reduces the error rate by about 2% and effectively prevents overfitting; compared with the original network structure, the memory occupied by the parameters is reduced to about 8% of the original, effectively preventing over-parameterization.
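      The two building blocks described in the abstract, sign-based weight binarization and a CFAR detector that adaptively decides which neurons to keep, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a classic cell-averaging (CA-)CFAR over the magnitudes of a layer's activations, where each neuron is the "cell under test" and its detection threshold is estimated from `n_ref` neighbouring neurons so that the false alarm rate stays at `pfa`; the function names and parameters are hypothetical.

```python
import numpy as np

def binarize(w):
    """1-bit weight quantization by sign (illustrative of binary
    weight/activation schemes; maps floats to +1/-1)."""
    return np.where(w >= 0, 1.0, -1.0)

def cfar_dropout_mask(activations, pfa=0.1, n_ref=4):
    """CA-CFAR-style keep/drop mask over a layer's activations.

    Each neuron is treated as a radar cell under test: it is kept only
    if its activation magnitude exceeds an adaptive threshold estimated
    from up to n_ref neighbouring neurons, with the scaling factor
    chosen so the false alarm rate is held at pfa.
    """
    x = np.abs(np.asarray(activations, dtype=float))
    n = x.size
    half = n_ref // 2
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        # reference window around neuron i, excluding the cell under test
        ref = np.concatenate([x[lo:i], x[i + 1:hi]])
        m = ref.size
        # CA-CFAR scaling: Pfa = (1 + alpha/m)^(-m)  =>  solve for alpha
        alpha = m * (pfa ** (-1.0 / m) - 1.0)
        keep[i] = x[i] > alpha * ref.mean()
    return keep
```

A neuron whose activation stands well above its local neighbourhood is retained, while neurons indistinguishable from the background are dropped, so the effective dropout pattern adapts to the data rather than being sampled at a fixed rate.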
