
Journal of Shandong University (Engineering Science) ›› 2016, Vol. 46 ›› Issue (3): 14-22. doi: 10.6040/j.issn.1672-3961.0.2015.316


Object tracking via L1 norm and least soft-threshold square

WANG Haijun1,2, GE Hongjuan1, ZHANG Shengyan2

  1. College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, Jiangsu, China;
  2. Key Laboratory of Aviation Information Technology in University of Shandong, Binzhou University, Binzhou 256603, Shandong, China
  • Received: 2015-04-07  Online: 2016-06-30  Published: 2015-04-07
  • Author biography: WANG Haijun (born 1980), male, from Yantai, Shandong, is a lecturer and PhD candidate; his main research interest is object tracking. E-mail: whjlym@163.com
  • Supported by the Natural Science Foundation of Shandong Province (ZR2015FL009), the Binzhou Science and Technology Development Program (2013ZC0103) and the Binzhou University Scientific Research Fund (BZXYG1524, BZXYG1318)



Abstract: Object tracking algorithms based on traditional sparse representation cannot cope with the occlusion and motion blur that arise during tracking, so a novel object tracking algorithm based on the L1 norm and least soft-threshold square was proposed. First, the appearance of the tracked object was modeled with PCA (principal component analysis) basis vectors, and the representation coefficients were constrained by the L1 norm. Second, the error term was solved explicitly with the least soft-threshold square method, and occlusion of the target was taken into account when updating the observation model. Finally, the tracking algorithm was built within the Bayesian inference framework. Experiments on fourteen challenging video sequences showed that, compared with other tracking algorithms, the proposed algorithm copes well with occlusion, angle variation, scale variation and illumination variation, and achieves a higher average overlap rate and a lower average center-point error.

Key words: sparse representation, object tracking, L1 norm, least soft-threshold square, observation model
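
The abstract describes the appearance model only at a high level. The sketch below (Python with NumPy) is a minimal illustration of the idea rather than the authors' implementation: the function and parameter names (lam, gamma, n_iter, sigma) are assumptions, and the PCA basis U is assumed to have orthonormal columns. It alternates two closed-form soft-thresholding steps, one for the L1-constrained representation coefficients and one for the error term, and turns the resulting reconstruction cost into a particle weight for the Bayesian framework.

import numpy as np

def soft_threshold(v, tau):
    # Element-wise soft-thresholding: S_tau(v) = sign(v) * max(|v| - tau, 0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lss_reconstruction_cost(x, U, lam=0.01, gamma=0.1, n_iter=20):
    # Alternately minimize
    #   0.5 * ||x - U z - e||_2^2 + lam * ||z||_1 + gamma * ||e||_1
    # over the L1-constrained PCA coefficients z and the error term e.
    # U is assumed to have orthonormal columns, so both subproblems
    # reduce to closed-form soft-thresholding updates.
    e = np.zeros_like(x)
    z = np.zeros(U.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(U.T @ (x - e), lam)   # coefficient step
        e = soft_threshold(x - U @ z, gamma)     # error (occlusion) step
    r = x - U @ z - e
    cost = 0.5 * float(r @ r) + lam * np.abs(z).sum() + gamma * np.abs(e).sum()
    return cost, z, e

def particle_weights(candidates, U, sigma=0.1):
    # Observation likelihood for the Bayesian (particle filter) framework:
    # candidate patches with a small reconstruction cost get a large weight.
    costs = np.array([lss_reconstruction_cost(x, U)[0] for x in candidates])
    w = np.exp(-costs / sigma)
    return w / w.sum()

With an orthonormal basis both updates are closed form, which keeps the per-particle cost low; the fraction of non-zero entries in e can also serve as a rough occlusion indicator when deciding whether to update the observation model.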

CLC number: TP391
[1] YUAN Guanglin, XUE Mogen. Sparsity-constrained and dynamic group structured sparse coding for robust visual tracking[J]. Acta Electronica Sinica, 2015, 43(8):1499-1505.
[2] WANG Haijun, ZHANG Shengyan. Object tracking algorithm via L2 norm and incremental orthogonal projective non-negative matrix factorization[J]. Journal of Natural Science of Heilongjiang University, 2015, 32(2):262-269.
[3] WANG Haijun, GE Hongjuan, ZHANG Shengyan. Object tracking via online low rank representation[J]. Journal of Xidian University (Natural Science), 2016, 43(5):112-118.
[4] WANG D, LU H C. Fast and robust object tracking via probability continuous outlier model[J]. IEEE Transactions on Image Processing, 2015, 24(12):5166-5176.
[5] WANG D, LU H C, BO C J. Visual tracking via weighted local cosine similarity[J]. IEEE Transactions on Cybernetics, 2015, 45(9):1838-1850.
[6] ZHANG K H, ZHANG L, YANG M H. Fast compressive tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(10):2002-2015.
[7] ZHANG K H, ZHANG L, YANG M H, et al. Robust object tracking via active feature selection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(11):1957-1967.
[8] ZHANG K H, ZHANG L, YANG M H. Real-time object tracking via online discriminative feature selection[J]. IEEE Transactions on Image Processing, 2013, 22(12):4664-4677.
[9] YANG F, LU H C, YANG M H. Robust superpixel tracking[J]. IEEE Transactions on Image Processing, 2014, 23(4):1639-1651.
[10] LIU R S, BAI S S, SU Z X, et al. Robust visual tracking via L0 regularized local low-rank feature learning[J]. Journal of Electronic Imaging, 2015, 24(3):033012.
[11] WANG D, LU H C. Online visual tracking via two view sparse representation[J]. IEEE Signal Processing Letters, 2014, 21(9):1031-1034.
[12] ZHUANG B H, LU H C, XIAO Z Y, et al. Visual tracking via discriminative sparse similarity map[J]. IEEE Transactions on Image Processing, 2014, 23(4):1872-1881.
[13] WANG D, LU H C, XIAO Z Y, et al. Inverse sparse tracker with a locally weighted distance metric[J]. IEEE Transactions on Image Processing, 2015, 24(9):2646-2657.
[14] ROSS D, LIM J, LIN R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1-3):125-141.
[15] MEI X, LING H B. Robust visual tracking using L1 minimization[C] //Proceedings of the IEEE International Conference on Computer Vision. Kyoto, Japan:IEEE Computer Society, 2009:1436-1443.
[16] BAO C L, WU Y, LING H B, et al. Real time robust L1 tracker using accelerated proximal gradient approach[C] //Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA:IEEE Computer Society, 2012:1830-1837.
[17] WANG D, LU H C, YANG M H. Online object tracking with sparse prototypes[J]. IEEE Transactions on Image Processing, 2013, 22(1):314-325.
[18] ADAM A, RIVLIN E, SHIMSHONI I. Robust fragments-based tracking using the integral histogram[C] //Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, USA:IEEE Computer Society, 2006:798-805.
[19] KWON J, LEE K M. Visual tracking decomposition[C] //Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA:IEEE Computer Society, 2010:1269-1276.
[20] ZHANG T Z, GHANEM B, LIU S, et al. Robust visual tracking via multi-task sparse learning[C] //Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA:IEEE Computer Society, 2012:2042-2049.