
Journal of Shandong University (Engineering Science) ›› 2018, Vol. 48 ›› Issue (1): 50-56. doi: 10.6040/j.issn.1672-3961.0.2017.294


An object fusion recognition algorithm based on DSmT

TANG Leshuang, TIAN Guohui*, HUANG Bin

  1. School of Control Science and Engineering, Shandong University, Jinan 250061, Shandong, China
  • Received: 2017-06-09  Online: 2018-02-20  Published: 2017-06-09
  • Corresponding author: TIAN Guohui (1969— ), male, born in Hejian, Hebei, China; professor, PhD, doctoral supervisor; research interests: cloud robotics, service robots, intelligent space, and brain-inspired intelligent robots. E-mail: g.h.tian@sdu.edu.cn
  • First author: TANG Leshuang (1991— ), male, born in Tengzhou, Shandong, China; master's student; research interests: object recognition and behavior understanding for robots. E-mail: tls2010@mail.sdu.edu.cn
  • Supported by:
    the National Natural Science Foundation of China (61773239), the Natural Science Foundation of Shandong Province (ZR2015FM007), and the Special Fund of the Taishan Scholars Project of Shandong Province

Abstract: Aiming at the problems encountered in improving the classification performance of deep models, i.e., insufficient hardware capability, the difficulty of structural innovation, and limited training samples, an object fusion recognition algorithm based on DSmT (Dezert-Smarandache theory) reasoning was proposed. For an object to be recognized, the recognition information provided by different deep learning models was fused following the idea of data fusion. Existing pretrained deep learning models were fine-tuned for the specific classification task. To overcome the difficulty of constructing basic belief assignments (BBAs) in DSmT, the classification outputs of the deep networks were used as the BBAs of the evidence sources. The BBAs were then fused at the decision level with the DSmT combination rule, so that objects could be recognized accurately. With the network structures and the dataset unchanged, the proposed method was compared with the single-model and average-value methods. The experimental results showed that the method could effectively improve the recognition rate of object images.

Key words: information fusion, deep learning, deep neural network, DSmT (Dezert-Smarandache theory) reasoning, object recognition

CLC number: TP242.6
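The abstract describes fusing the classification outputs of several fine-tuned deep networks at the decision level with a DSmT combination rule. As a rough illustration of that idea, and not the authors' exact formulation, the sketch below treats two classifiers' softmax vectors as BBAs over singleton classes and combines them with a PCR5-style proportional conflict redistribution rule; the function name, the example probabilities, and the restriction to singleton focal elements are assumptions made for illustration only.

```python
import numpy as np


def pcr5_fuse(m1, m2, eps=1e-12):
    """Fuse two basic belief assignments (BBAs) defined on singleton classes
    using a PCR5-style proportional conflict redistribution rule (two sources).

    m1, m2: 1-D arrays of non-negative masses that each sum to 1
            (e.g. softmax outputs of two fine-tuned classifiers).
    Returns the fused BBA as a 1-D array summing to 1.
    """
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    n = m1.size
    fused = m1 * m2                            # conjunctive consensus on each singleton class
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            conflict = m1[i] * m2[j]           # partial conflict between class i (source 1) and class j (source 2)
            denom = m1[i] + m2[j] + eps
            fused[i] += m1[i] * conflict / denom   # share returned to class i, proportional to m1[i]
            fused[j] += m2[j] * conflict / denom   # share returned to class j, proportional to m2[j]
    return fused / fused.sum()


# Hypothetical usage: decision-level fusion of two models' softmax outputs.
p_model_a = np.array([0.60, 0.30, 0.10])       # e.g. output of one fine-tuned network
p_model_b = np.array([0.40, 0.50, 0.10])       # e.g. output of another fine-tuned network
fused = pcr5_fuse(p_model_a, p_model_b)
print(fused, fused.argmax())                   # fused BBA and the recognized class index
```

The argmax over the fused BBA gives the final decision, which is the quantity compared against the single-model and average-value baselines mentioned in the abstract.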