
Journal of Shandong University (Engineering Science) ›› 2025, Vol. 55 ›› Issue (4): 72-83. doi: 10.6040/j.issn.1672-3961.0.2025.004

• Machine Learning and Data Mining •

The multi-sensor fusion mapping and relocalization based on LVI-SAM-Stereo in indoor and outdoor scenes

JIANG Fengyang1,2, CHENG Yao1,2*, HAN Zhe2, WANG Huaizhen1,2, ZHOU Fengyu3, DONG Lei2

  1. Shandong New Generation Information Industrial Technology Research Institute, Jinan 250102, Shandong, China;
  2. Inspur Intelligent Terminal Co., Ltd., Jinan 250101, Shandong, China;
  3. School of Control Science and Engineering, Shandong University, Jinan 250061, Shandong, China
  • Published: 2025-08-31
  • About the authors: JIANG Fengyang (1997— ), male, from Jinan, Shandong, China, M.S., whose main research interests are robot mapping, localization, and navigation. E-mail: jiangfy@inspur.com. *Corresponding author: CHENG Yao (1985— ), female, from Jinan, Shandong, China, professor-level senior engineer, Ph.D., whose main research interests are robot mapping, localization, and navigation. E-mail: chengyao01@inspur.com
  • Supported by:
    the Key Research and Development Program of Shandong Province (Competitive Innovation Platform) (2023CXPT094), the Key Research and Development Program of Shandong Province (Major Science and Technology Innovation Project) (2024CXGC010213), and the Second Batch of Jinan City-University Integration Development Strategy Projects (JNSX2023012)


Abstract: To address the low mapping and relocalization accuracy and the poor scene adaptability of robots in indoor and outdoor scenes, a tightly-coupled light detection and ranging (LiDAR)-visual-inertial odometry via smoothing, mapping, and relocalization by stereo (LVI-SAM-Stereo) method was proposed. The LiDAR-inertial pose estimation model was constructed from point-to-line and point-to-plane distances. Multi-sensor information interaction enabled rapid initialization of the stereo-inertial odometry, and the odometry pose was optimized by minimizing the reprojection error. A cross-modal loop closure detection mechanism combining Scan-Context with visual features effectively reduced false loop closures. A bidirectional relocalization optimization architecture was developed, in which the factor-graph-optimized odometry provided the initial pose estimate for visual tracking, while perspective-n-point (PnP)-derived visual poses assisted LiDAR point cloud registration. Extensive experiments on public datasets and in real-world scenes showed that, compared with the tightly-coupled LiDAR inertial odometry via smoothing and mapping (LIO-SAM) method and the tightly-coupled LiDAR-visual-inertial odometry via smoothing and mapping (LVI-SAM) method, LVI-SAM-Stereo improved outdoor mapping accuracy by 3.10% and 5.97% and reduced average indoor drift by 72.7% and 43.05%, respectively, significantly improving mapping precision and scene adaptability. The relocalization met the engineering requirements of autonomous robot navigation.
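For orientation, the residuals named above have standard forms in LOAM-family systems such as LIO-SAM [5]; the following is a sketch of those standard forms, not necessarily the paper's exact formulation. Let a LiDAR feature point \(\mathbf{p}_i\) be transformed into the map frame as \(\tilde{\mathbf{p}}_i = \mathbf{R}\mathbf{p}_i + \mathbf{t}\); for an edge feature with nearest map edge points \(\mathbf{p}_a, \mathbf{p}_b\), and a planar feature with nearest map plane points \(\mathbf{p}_a, \mathbf{p}_b, \mathbf{p}_c\), the point-to-line and point-to-plane distances are

\[ d_{\mathrm{edge}} = \frac{\lVert(\tilde{\mathbf{p}}_i-\mathbf{p}_a)\times(\tilde{\mathbf{p}}_i-\mathbf{p}_b)\rVert}{\lVert\mathbf{p}_a-\mathbf{p}_b\rVert}, \qquad d_{\mathrm{plane}} = \left|(\tilde{\mathbf{p}}_i-\mathbf{p}_a)\cdot\frac{(\mathbf{p}_b-\mathbf{p}_a)\times(\mathbf{p}_c-\mathbf{p}_a)}{\lVert(\mathbf{p}_b-\mathbf{p}_a)\times(\mathbf{p}_c-\mathbf{p}_a)\rVert}\right|, \]

and the stereo-inertial odometry step minimizes the reprojection error over landmarks \(\mathbf{X}_j\) observed at pixels \(\mathbf{u}_j\),

\[ \min_{\mathbf{R},\mathbf{t}}\ \sum_j \big\lVert \mathbf{u}_j - \pi(\mathbf{R}\mathbf{X}_j+\mathbf{t}) \big\rVert^2, \]

where \(\pi(\cdot)\) is the stereo camera projection; in practice a robust loss such as Huber is usually wrapped around each term.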

Key words: robot, multi-sensor fusion, visual-inertial odometry, loop closure detection, relocalization
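To make the relocalization step concrete, below is a minimal, hypothetical Python sketch, not the authors' implementation, of the idea that a visual pose solved by PnP (e.g., EPnP [12] inside RANSAC) seeds the registration of the current LiDAR scan against the prior map. The function name, inputs, and the use of OpenCV and Open3D are illustrative assumptions, and the camera-LiDAR extrinsic is omitted for brevity.

import numpy as np
import cv2
import open3d as o3d

def pnp_seeded_icp(pts3d, pts2d, K, scan_cloud, map_cloud, max_dist=1.0):
    """Hypothetical sketch: visual PnP pose first, then ICP refinement."""
    # 1) Visual pose from 2D-3D correspondences (EPnP inside RANSAC).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
        flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    T_cam_from_map = np.eye(4)
    T_cam_from_map[:3, :3] = cv2.Rodrigues(rvec)[0]
    T_cam_from_map[:3, 3] = tvec.ravel()
    # PnP yields the camera-from-map transform; the scan-to-map initial
    # guess is its inverse (camera-LiDAR extrinsic ignored in this sketch).
    T_init = np.linalg.inv(T_cam_from_map)

    # 2) Point-to-plane ICP of the current scan against the prior map,
    #    initialized with the visual pose instead of the identity.
    map_cloud.estimate_normals()
    result = o3d.pipelines.registration.registration_icp(
        scan_cloud, map_cloud, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

Seeding the registration with the PnP pose matters because ICP-style methods converge only from a sufficiently close initial guess; the visual pose supplies that guess when the robot relocalizes in a previously built map.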

CLC number: TP391
[1] YIN H S, LI S M, TAO Y, et al. Dynam-SLAM: an accurate, robust stereo visual-inertial SLAM method in dynamic environments[J]. IEEE Transactions on Robotics, 2023, 39(1): 289-308.
[2] YU Z L, ZHU L D, LU G Y. Tightly-coupled fusion of VINS and motion constraint for autonomous vehicle[J]. IEEE Transactions on Vehicular Technology, 2022, 71(6): 5799-5810.
[3] ZHONG X L, LI Y H, ZHU S Q, et al. LVIO-SAM: a multi-sensor fusion odometry via smoothing and mapping[C] //2021 IEEE International Conference on Robotics and Biomimetics (ROBIO). Sanya, China: IEEE, 2021: 440-445.
[4] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[5] SHAN T X, ENGLOT B, MEYERS D, et al. LIO-SAM: tightly-coupled LiDAR inertial odometry via smoothing and mapping[C] //2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, USA: IEEE, 2020: 5135-5142.
[6] SHAN T X, ENGLOT B, RATTI C, et al. LVI-SAM: tightly-coupled LiDAR-visual-inertial odometry via smoothing and mapping[C] //2021 IEEE International Conference on Robotics and Automation (ICRA). Xi'an, China: IEEE, 2021: 5692-5698.
[7] LIN Y, GAO F, QIN T, et al. Autonomous aerial navigation using monocular visual-inertial fusion[J]. Journal of Field Robotics, 2018, 35(1): 23-51.
[8] HUANG J, ZHANG Y D, LI X. LiDAR-visual-inertial odometry using point and line features[C] //2022 4th International Conference on Robotics and Computer Vision (ICRCV). Wuhan, China: IEEE, 2022: 215-222.
[9] JIA Y X, NI Z K, NI X, et al. A multi-sensor fusion localization algorithm via dynamic target removal[C] //2023 15th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). Hangzhou, China: IEEE, 2023: 138-142.
[10] LIU Z B, LI Z K, LIU A, et al. LVI-Fusion: a robust LiDAR-visual-inertial SLAM scheme[J]. Remote Sensing, 2024, 16(9): 1524.
[11] SEGAL A, HAEHNEL D, THRUN S. Generalized-ICP[C] //Robotics: Science and Systems. Seattle, USA: MIT, 2009: 435.
[12] LEPETIT V, MORENO-NOGUER F, FUA P. EPnP: an accurate O(n) solution to the PnP problem[J]. International Journal of Computer Vision, 2009, 81(2): 155-166.
[13] LV J J, XU J H, HU K W, et al. Targetless calibration of LiDAR-IMU system based on continuous-time batch estimation[C] //2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, USA: IEEE, 2020: 9968-9975.
[14] QUIGLEY M, CONLEY K, GERKEY B, et al. ROS: an open-source robot operating system[C] //ICRA Workshop on Open Source Software. Kobe, Japan: IEEE, 2009: 3-5.
[15] CHUM O, MATAS J, KITTLER J. Locally optimized RANSAC[C] //Joint Pattern Recognition Symposium. Heidelberg, Germany: Springer, 2003: 236-243.
[16] HELMBERGER M, MORIN K, BERNER B, et al. The Hilti SLAM challenge dataset[J]. IEEE Robotics and Automation Letters, 2022, 7(3): 7518-7525.