
Journal of Shandong University (Engineering Science) ›› 2020, Vol. 50 ›› Issue (4): 28-34. doi: 10.6040/j.issn.1672-3961.0.2019.454

Image caption generation method based on class activation mapping and attention mechanism

LIAO Nanxing1, ZHOU Shibin1*, ZHANG Guopeng1, CHENG Deqiang2

  1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China;
  2. Sun Yueqi Honors College, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China

  • Published: 2020-08-13
  • About the authors: LIAO Nanxing (1995— ), female, from Tongren, Guizhou, is a master's student; her research focuses on image caption generation methods. E-mail: nxliao@cumt.edu.cn. *Corresponding author: ZHOU Shibin (1970— ), male, from Lu'an, Anhui, is a lecturer with a Ph.D.; his research interests include machine learning and computer vision. E-mail: zhoushibin@cumt.edu.cn
  • Supported by: National Natural Science Foundation of China (61971421)

Abstract: Building on the soft-attention image captioning framework, an image caption generation method based on class activation mapping and an attention mechanism was proposed. Class activation mapping added localization and richer semantic information to the convolutional features, giving them a better correspondence with the words of the description and alleviating the alignment problem between convolutional features and caption words, so that the generated description could cover the image content as completely as possible. The attention structure was improved with a two-layer long short-term memory network, so that the attention mechanism could draw on both global and local information and select suitable feature representations when generating each word. Experimental results showed that the improved model outperformed the soft-attention model and other baselines on most evaluation metrics; on the MSCOCO dataset, its BLEU-4 score was 16.8% higher than that of the soft-attention model. Class activation mapping thus helped align spatial image information with the semantics of the description, reducing the loss of key information and improving the accuracy of the generated captions.

Key words: image caption, attention mechanism, class activation mapping, convolutional neural network, recurrent neural network
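To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch of one decoding step that combines class activation mapping (CAM) enrichment of the convolutional features with soft attention driven by a two-layer LSTM. It is an illustration only: the module names, feature dimensions, and the concatenation of the CAM maps with the convolutional features are assumptions made for the example, not the authors' published implementation.

# Minimal sketch (PyTorch): CAM-enriched features + two-layer LSTM soft attention.
# All shapes, module names, and the feature-fusion step are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CamAttentionDecoderStep(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=80, embed_dim=512, hidden_dim=512):
        super().__init__()
        # Classifier weights of a GAP-based CNN head; CAM reuses them to score each location.
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)
        fused_dim = feat_dim + num_classes            # conv features concatenated with CAM maps
        self.att_lstm = nn.LSTMCell(embed_dim + hidden_dim + fused_dim, hidden_dim)  # attention LSTM
        self.lang_lstm = nn.LSTMCell(fused_dim + hidden_dim, hidden_dim)             # language LSTM
        self.att_v = nn.Linear(fused_dim, hidden_dim)
        self.att_h = nn.Linear(hidden_dim, hidden_dim)
        self.att_out = nn.Linear(hidden_dim, 1)

    def cam_enrich(self, feats):
        # feats: (B, L, feat_dim), a flattened spatial grid of convolutional features.
        cam = F.relu(self.classifier(feats))          # (B, L, num_classes): per-location class evidence
        return torch.cat([feats, cam], dim=-1)        # (B, L, fused_dim)

    def forward(self, feats, word_emb, state):
        # One decoding step: attend over CAM-enriched features, then update the language LSTM.
        (h_att, c_att), (h_lang, c_lang) = state
        v = self.cam_enrich(feats)                    # (B, L, fused_dim)
        v_mean = v.mean(dim=1)                        # global context fed to the attention LSTM
        h_att, c_att = self.att_lstm(torch.cat([word_emb, h_lang, v_mean], dim=-1), (h_att, c_att))
        scores = self.att_out(torch.tanh(self.att_v(v) + self.att_h(h_att).unsqueeze(1)))  # (B, L, 1)
        alpha = F.softmax(scores, dim=1)              # soft-attention weights over spatial locations
        context = (alpha * v).sum(dim=1)              # attended context vector
        h_lang, c_lang = self.lang_lstm(torch.cat([context, h_att], dim=-1), (h_lang, c_lang))
        return h_lang, ((h_att, c_att), (h_lang, c_lang))

# Usage with hypothetical shapes: a 7x7 ResNet grid flattened to 49 locations, batch of 2.
step = CamAttentionDecoderStep()
feats = torch.randn(2, 49, 2048)
word_emb = torch.randn(2, 512)
zeros = torch.zeros(2, 512)
state = ((zeros, zeros), (zeros, zeros))
h_lang, state = step(feats, word_emb, state)          # h_lang would feed a word classifier

The two-LSTM arrangement (an attention LSTM feeding a language LSTM) follows the common top-down attention design for captioning; whether the paper fuses CAM maps by concatenation or by another operation is not specified in the abstract, so the concatenation here is only one plausible choice.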

CLC number: TP391