Journal of Shandong University (Engineering Science) ›› 2021, Vol. 51 ›› Issue (3): 15-21. DOI: 10.6040/j.issn.1672-3961.0.2020.249

Adaptive multi-domain sentiment analysis based on knowledge distillation

YANG Xiuyuan, PENG Tao, YANG Liang*, LIN Hongfei

  1. College of Computer Science and Technology, Dalian University of Technology, Dalian 116023, Liaoning, China
  • Online: 2021-06-20  Published: 2021-06-24
  • About the authors: YANG Xiuyuan (1995- ), female, born in Nanyang, Henan, master's student; her main research interest is sentiment analysis. E-mail: liang@dlut.edu.cn. *Corresponding author: YANG Liang (1986- ), male, born in Dalian, Liaoning, lecturer, Ph.D.; his main research interests are sentiment analysis and opinion mining. E-mail: 815754134@qq.com
  • Supported by:
    National Key Research and Development Program of China (2018YFC0830604); National Natural Science Foundation of China (61702080, 61806038); Fundamental Research Funds for the Central Universities (DUT19RC(4)016); China Postdoctoral Science Foundation (2018M631788)

Abstract: An adaptive multi-domain knowledge distillation framework was proposed, which effectively accelerated inference and reduced the number of model parameters while preserving model performance; knowledge distillation was applied to the sentiment analysis task. Distillation was performed for each specific domain and covered several aspects of the model: word-embedding-layer distillation, encoder-layer distillation (attention distillation and hidden-state distillation), and output-prediction-layer distillation, so that the student learned every aspect of the corresponding domain-specific teacher model. Across domains, the student models kept the same encoder with shared weights and fitted the different teacher models through domain-specific output layers; the importance of each domain's teacher model to the data was learned selectively, which further improved prediction accuracy. Experimental results on several public datasets showed that single-domain knowledge distillation improved model accuracy by 2.39% on average, and multi-domain knowledge distillation improved it by a further 0.5% on average. Compared with single-domain knowledge distillation, the framework enhanced the generalization ability of the student model and improved its performance.
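
The layer-wise distillation and the shared-encoder, domain-specific-head design described above can be made concrete in code. The following is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the encoder interface, hidden size, temperature, loss weighting, and domain names are all illustrative.

```python
# Minimal sketch (not the authors' code) of the distillation losses and the
# multi-domain student described in the abstract. PyTorch is assumed; all
# shapes, the temperature, and the domain names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    """Output-prediction-layer distillation: KL divergence between the
    temperature-softened teacher and student distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

def layer_distillation_loss(student_states, teacher_states):
    """Embedding-, hidden-state-, and attention-map distillation: mean-squared
    error between matching student and teacher tensors."""
    return F.mse_loss(student_states, teacher_states)

class MultiDomainStudent(nn.Module):
    """One encoder shared across all domains; one output head per domain,
    each head fitted to its own domain-specific teacher."""
    def __init__(self, encoder, hidden_size, num_labels, domains):
        super().__init__()
        self.encoder = encoder  # shared weights across domains
        self.heads = nn.ModuleDict(
            {d: nn.Linear(hidden_size, num_labels) for d in domains})

    def forward(self, inputs, domain):
        hidden = self.encoder(inputs)      # [batch, hidden_size]
        return self.heads[domain](hidden)  # domain-specific logits

if __name__ == "__main__":
    # Toy check: a linear bag-of-features encoder stands in for the real one.
    student = MultiDomainStudent(
        nn.Sequential(nn.Linear(300, 128), nn.ReLU()),
        hidden_size=128, num_labels=2, domains=["books", "dvd"])
    x = torch.randn(4, 300)
    teacher_logits = torch.randn(4, 2)     # stand-in for a teacher's output
    loss = soft_target_loss(student(x, "books"), teacher_logits)
    loss.backward()
    print(loss.item())
```

In a full training loop, the per-layer losses would be summed with weights alongside the soft-target loss, and each domain's batches would be distilled against that domain's own teacher while encoder gradients accumulate across all domains.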

Key words: knowledge distillation, adaptive, multi-domain, sentiment analysis, deep learning


CLC number: TP391