    Citation: CHEN Rui, LI Fei, WANG Zhanquan. Prefix-LSDPM: A Few-Shot Oriented Online Learning Session Dropout Prediction Model[J]. Journal of East China University of Science and Technology, 2023, 49(5): 754-763. DOI: 10.14135/j.cnki.1006-3080.20230206003

    Prefix-LSDPM: A Few-Shot Oriented Online Learning Session Dropout Prediction Model

    • Abstract: Online learning session dropout prediction aims to accurately predict session dropout during the online learning process, and is an important research task in the field of smart education. To address the low prediction accuracy of existing models in few-shot scenarios, a prefix-prompt-based online learning session dropout prediction model, Prefix-LSDPM, is proposed. To capture the internal features of individual learning behaviors and the implicit correlations between consecutive learning behaviors, the model performs masked learning on prompt-form synthesized sequences in a Transformer network with modified key-value vectors. To reduce the number of parameters involved in training and thus suit few-shot learning, the session dropout prediction task is formulated to resemble the pre-training task, and only the prompt parameters are tuned on top of the frozen pre-trained parameters. Experimental results on multiple datasets show that Prefix-LSDPM outperforms existing models in prediction accuracy and still achieves good prediction performance in few-shot learning.
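The mechanism summarized above, tuning a short trainable prefix against frozen pre-trained weights inside attention layers with modified key-value vectors, follows the general prefix-tuning pattern. Below is a minimal PyTorch sketch of that pattern; the class name PrefixAttention, the single-head simplification, and the dimension values are illustrative assumptions rather than details from the paper (only the default prefix length of 3 tokens echoes the optimum reported in the abstract).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixAttention(nn.Module):
    """Single-head attention with trainable prefix key/value slots.

    The q/k/v projections stand in for frozen pre-trained weights;
    only prefix_k and prefix_v are updated during prefix-tuning.
    """

    def __init__(self, d_model: int, prefix_len: int = 3):
        super().__init__()
        self.d_model = d_model
        # Stand-ins for pre-trained projections (real weights would be loaded).
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        for module in (self.q_proj, self.k_proj, self.v_proj):
            for p in module.parameters():
                p.requires_grad_(False)  # freeze the backbone
        # Trainable prefix: prefix_len extra key/value slots.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) embedded learning-behavior sequence.
        b = x.size(0)
        q = self.q_proj(x)
        # Prepend the prefix slots to the projected keys and values so that
        # every behavior token can attend to the task-specific prefix.
        k = torch.cat([self.prefix_k.expand(b, -1, -1), self.k_proj(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), self.v_proj(x)], dim=1)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_model ** 0.5, dim=-1)
        return attn @ v
```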


      Abstract: Online learning session dropout prediction aims to accurately predict session dropout during the online learning process, and is an important research task in the field of smart education. To address the low prediction accuracy of existing models in few-shot scenarios, a prefix-tuning-based online learning session dropout prediction model, Prefix-LSDPM, is proposed. The model concatenates a prompt vector, as a prefix, with the sequence of learning behavior features, freezes the pre-trained parameter weights, and trains only the prefix prompt parameters. To capture the internal features of individual learning behaviors and the implicit correlations between consecutive learning behaviors, the model performs masked learning on prompt-form synthesized sequences in a Transformer network with modified key-value vectors. Experiments are conducted on three pre-trained models (BERT, ALBERT, and UniLM) and three datasets (EdNet, XuetangX 1, and XuetangX 2). Ablation experiments show that, for the parameters of all three pre-trained models, a prompt sequence length of 3 tokens yields the best predictive performance for Prefix-LSDPM. The ALBERT-based Prefix-LSDPM performs best, reaching an AUC of 90.65%, which is 9.29% higher than the best-performing existing model. Using the optimal prompt sequence length, Prefix-LSDPM is compared with fine-tuning on the three pre-trained models. The experimental results on multiple datasets show that the prediction accuracy of Prefix-LSDPM is superior to that of existing models, with an AUC 5.19% higher than that of the fine-tuning method. In the few-shot performance experiment, Prefix-LSDPM trained on 1% of the training set samples still reaches an AUC of 86.95%, which illustrates that Prefix-LSDPM can achieve advanced prediction performance in few-shot learning.
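To make the few-shot parameter economy concrete, the following sketch continues the PrefixAttention example above: the frozen backbone contributes no gradients, so the optimizer only sees the prefix vectors plus a small task head. The head, learning rate, pooling choice, and toy tensor sizes are all assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

# Continues the PrefixAttention sketch above; all sizes are toy values.
model = PrefixAttention(d_model=128, prefix_len=3)
head = torch.nn.Linear(128, 2)  # hypothetical dropout / no-dropout head

# Only the prefix vectors (and the small head) receive gradients, which
# keeps the trainable parameter count tiny for few-shot tuning.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable + list(head.parameters()), lr=1e-3)

x = torch.randn(4, 16, 128)          # 4 sessions, 16 behaviors, 128 dims
labels = torch.randint(0, 2, (4,))   # per-session dropout labels

logits = head(model(x).mean(dim=1))  # mean-pool the session representation
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```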

