
Journal of Shandong University (Engineering Science) ›› 2020, Vol. 50 ›› Issue (4): 8-13. doi: 10.6040/j.issn.1672-3961.0.2019.422

• Machine Learning and Data Mining •

  • About the author: CAI Guoyong (1971—), male, born in Hechi, Guangxi, China; professor, Ph.D.; main research interest: social media data mining. E-mail: ccgycai@guet.edu.cn
  • Supported by: the National Natural Science Foundation of China (61763007); the Key Program of the Guangxi Natural Science Foundation (2017JJD160017); the Guangxi Science and Technology Major Project (AA19046004)

Visual sentiment analysis based on spatial attention mechanism and convolutional neural network

Guoyong CAI(),Xinhao HE,Yangyang CHU   

  1. Guangxi Key Lab of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China
  • Received:2019-07-23 Online:2020-08-20 Published:2020-08-13


Abstract:

Existing deep-learning approaches to visual sentiment analysis largely ignored the differences in emotional intensity across local regions of an image. To address this problem, a convolutional neural network with spatial attention (SA-CNN) was proposed to improve visual sentiment analysis. An affective region detection network was designed to discover the local regions of an image that evoke emotion. A spatial attention mechanism assigned an attention weight to each location in the sentiment feature map, so that the sentiment features of each region were extracted appropriately, which facilitated classification based on local sentiment information. Discriminative visual features were formed by fusing local region features with global image features and were used to train a neural network classifier for visual sentiment. The method achieved classification accuracies of 82.56%, 80.23% and 79.17% on three real-world datasets, Twitter Ⅰ, Twitter Ⅱ and Flickr, which demonstrated that exploiting the differences in emotional expression among local image regions improved visual sentiment classification.

Key words: image processing, sentiment analysis, deep learning, attention mechanism, neural network
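The spatial-attention step described in the abstract — scoring each location of a convolutional feature map, normalizing the scores with a softmax, pooling the map with the resulting weights, and fusing the attended local feature with a global image feature — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the scoring vector `w` and the fusion by simple concatenation are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def spatial_attention_pool(feature_map, w):
    """Weighted pooling of a CNN feature map via spatial attention.

    feature_map: (C, H, W) activations from a convolutional layer.
    w: (C,) scoring vector that rates each spatial position.
    Returns a (C,) attended feature vector.
    """
    C, H, W = feature_map.shape
    flat = feature_map.reshape(C, H * W)   # one column per spatial location
    scores = w @ flat                      # (H*W,) one score per location
    alpha = softmax(scores)                # attention weights, sum to 1
    return flat @ alpha                    # weighted sum over locations

# Toy example: 4 channels on a 3x3 spatial grid.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 3, 3))
w = rng.standard_normal(4)
v_local = spatial_attention_pool(fmap, w)

# Fuse with a global feature (here: average pooling) into one
# discriminative visual feature, as the abstract describes.
v_global = fmap.mean(axis=(1, 2))
v = np.concatenate([v_local, v_global])
print(v.shape)  # (8,)
```

With a zero scoring vector the attention weights are uniform and the pooled feature reduces to plain average pooling, which makes the mechanism easy to sanity-check.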

CLC number: TP391

Fig.1 Visual sentiment analysis based on spatial attention mechanism and convolutional neural network

Fig.2 The residual module in a residual network

Fig.3 Classification accuracy of different methods on the Twitter Ⅰ and Twitter Ⅱ datasets

Fig.4 Classification accuracy of different methods on the Flickr dataset
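Fig. 2 depicts the residual module of residual networks: the block's output adds the input back through an identity shortcut, y = relu(x + F(x)), so a block with zero weights reduces to the identity. A minimal sketch, in which dense weight matrices stand in for the block's convolutions; all names are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: y = relu(x + W2 relu(W1 x)).

    x: (d,) input feature; w1, w2: (d, d) weights standing in for
    the block's two convolutional layers.
    """
    return relu(x + w2 @ relu(w1 @ x))

# With zero weights the residual branch F(x) vanishes and the block
# passes its (non-negative) input through unchanged — the property
# that makes very deep residual networks easy to optimize.
x = np.array([1.0, 2.0, 3.0])
z = np.zeros((3, 3))
print(residual_block(x, z, z))  # [1. 2. 3.]
```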
