ImMesh: An Immediate LiDAR Localization and Meshing Framework

Abstract

In this paper, we propose a novel LiDAR(-inertial) odometry and mapping framework that achieves simultaneous localization and meshing in real time. The proposed framework, termed ImMesh, comprises four tightly-coupled modules: receiver, localization, meshing, and broadcaster. The localization module utilizes the preprocessed sensor data from the receiver, estimates the sensor pose online by registering LiDAR scans to the map, and dynamically grows the map. Then, our meshing module takes the registered LiDAR scan and incrementally reconstructs the triangle mesh on the fly. Finally, the real-time odometry, map, and mesh are published via our broadcaster. The key contribution of this work is the meshing module, which represents the scene with an efficient hierarchical voxel structure, quickly retrieves the voxels observed by each new scan, and reconstructs the triangle facets in each voxel incrementally. This voxel-wise meshing operation is carefully designed for efficiency: it first performs a dimension reduction by projecting the 3D points onto a 2D local plane contained in the voxel, and then executes the meshing with pull, commit, and push steps for the incremental reconstruction of triangle facets. To the best of our knowledge, this is the first work in the literature that can reconstruct the triangle mesh of large-scale scenes online, relying only on a standard CPU without GPU acceleration. To share our findings and contribute to the community, we make our code publicly available on GitHub: https://github.com/hku-mars/ImMesh
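
To make the voxel-wise meshing operation easier to follow, the minimal C++ sketch below illustrates the idea under simplifying assumptions: points falling into a voxel are projected onto the voxel's local plane (the dimension reduction), and the voxel's triangle facets are refreshed through pull, commit, and push steps. All names here (VoxelMesher, fit_plane, etc.) are illustrative placeholders rather than the actual ImMesh interfaces, and the commit step uses a trivial angle-sorted fan triangulation in place of ImMesh's incremental 2D meshing.

```cpp
// Minimal sketch (not the actual ImMesh implementation) of voxel-wise meshing:
// dimension reduction onto the voxel's local plane, then pull/commit/push.
#include <Eigen/Dense>
#include <algorithm>
#include <array>
#include <cmath>
#include <numeric>
#include <vector>

using Vec3 = Eigen::Vector3d;
using Vec2 = Eigen::Vector2d;
using Triangle = std::array<int, 3>;  // indices into the voxel's point list

struct VoxelMesher {
  std::vector<Vec3> pts;              // points accumulated in this voxel
  Vec3 center = Vec3::Zero();         // plane point (centroid of pts)
  Vec3 normal = Vec3::UnitZ();        // plane normal (smallest PCA direction)
  Vec3 axis_u = Vec3::UnitX();        // in-plane axes spanning the 2D frame
  Vec3 axis_v = Vec3::UnitY();
  std::vector<Triangle> facets;       // committed triangle facets

  // Fit the voxel's local plane by PCA over the stored points.
  void fit_plane() {
    center = Vec3::Zero();
    for (const Vec3 &p : pts) center += p;
    center /= static_cast<double>(pts.size());
    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const Vec3 &p : pts) cov += (p - center) * (p - center).transpose();
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
    normal = es.eigenvectors().col(0);  // smallest eigenvalue -> plane normal
    axis_u = es.eigenvectors().col(2);  // largest spread -> first in-plane axis
    axis_v = normal.cross(axis_u).normalized();
  }

  // Dimension reduction: project a 3D point into the voxel's 2D plane frame.
  Vec2 project(const Vec3 &p) const {
    const Vec3 d = p - center;
    return Vec2(d.dot(axis_u), d.dot(axis_v));
  }

  // Pull: snapshot the facets currently stored in this voxel (the real system
  // also gathers the facets of neighboring voxels here).
  std::vector<Triangle> pull() const { return facets; }

  // Commit: absorb the newly registered points and re-triangulate locally in
  // 2D. A trivial angle-sorted fan triangulation stands in for ImMesh's
  // incremental 2D meshing, purely to show the data flow.
  std::vector<Triangle> commit(const std::vector<Vec3> &new_pts) {
    pts.insert(pts.end(), new_pts.begin(), new_pts.end());
    if (pts.size() < 3) return {};
    fit_plane();
    std::vector<Vec2> uv;
    uv.reserve(pts.size());
    for (const Vec3 &p : pts) uv.push_back(project(p));
    std::vector<int> order(uv.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
      return std::atan2(uv[a].y(), uv[a].x()) < std::atan2(uv[b].y(), uv[b].x());
    });
    std::vector<Triangle> updated;
    for (std::size_t i = 1; i + 1 < order.size(); ++i)
      updated.push_back({order[0], order[i], order[i + 1]});
    return updated;
  }

  // Push: write the updated facets back so later scans (and the broadcaster)
  // see the refreshed mesh of this voxel.
  void push(std::vector<Triangle> updated) { facets = std::move(updated); }
};
```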

Publication
Under review


1. Introduction

ImMesh is a novel LiDAR(-inertial) odometry and meshing framework that takes LiDAR data as input and achieves simultaneous localization and meshing in real time. ImMesh comprises four tightly-coupled modules: receiver, localization, meshing, and broadcaster. The localization module utilizes the preprocessed sensor data from the receiver, estimates the sensor pose online by registering LiDAR scans to the map, and dynamically grows the map. Then, our meshing module takes the registered LiDAR scan and incrementally reconstructs the triangle mesh on the fly. Finally, the real-time odometry, map, and mesh are published via our broadcaster.
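
As a rough illustration of how the four modules hand data to each other, the hypothetical sketch below shows one iteration of the per-scan pipeline. The types and member functions (Receiver::preprocess, Localization::register_scan, etc.) are placeholders with stub bodies and do not reflect the actual ImMesh code.

```cpp
// Hypothetical per-scan processing loop; all names are illustrative stubs.
#include <vector>

struct Point { double x = 0, y = 0, z = 0; };
using PointCloud = std::vector<Point>;
struct Pose { double xyz[3] = {0, 0, 0}; double quat[4] = {1, 0, 0, 0}; };

struct Receiver {      // pre-processes the raw LiDAR (and IMU) measurements
  PointCloud preprocess(const PointCloud &raw) { return raw; }
};
struct Localization {  // registers each scan to the map and grows the map
  Pose register_scan(const PointCloud & /*scan*/) { return Pose{}; }
  PointCloud to_world(const PointCloud &scan, const Pose & /*pose*/) { return scan; }
};
struct Meshing {       // incrementally updates the voxel-wise triangle mesh
  void update(const PointCloud & /*registered_scan*/) {}
};
struct Broadcaster {   // publishes the odometry, map, and mesh in real time
  void publish(const Pose & /*pose*/, const PointCloud & /*scan*/) {}
};

// Each new LiDAR scan flows through the four tightly-coupled modules:
// receiver -> localization -> meshing -> broadcaster.
void process_scan(const PointCloud &raw, Receiver &rx, Localization &loc,
                  Meshing &mesher, Broadcaster &bc) {
  PointCloud scan  = rx.preprocess(raw);        // receiver
  Pose       pose  = loc.register_scan(scan);   // localization
  PointCloud world = loc.to_world(scan, pose);  // registered scan in world frame
  mesher.update(world);                         // meshing
  bc.publish(pose, world);                      // broadcaster
}
```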


1.1 Our accompanying videos

Our accompanying videos are available on YouTube and Bilibili (1, 2, 3).


2. What can ImMesh do?

2.1 Simultaneous LiDAR localization and mesh reconstruction on the fly


2.2 ImMesh for LiDAR point cloud reinforcement


2.3 ImMesh for rapid, lossless texture reconstruction


3. Contact us

If you have any questions about this work, please feel free to contact me <ziv.lin.ljrATgmail.com> or Dr. Fu Zhang <fuzhangAThku.hk> via email.

Jiarong Lin
Ph.D. candidate in Robotics🤖

My research interests include simultaneous localization and mapping (SLAM), multi-sensor (i.e., LiDAR-inertial-visual) fusion, and 3D reconstruction. My popular works include R3LIVE, FAST-LIO, loam-livox, R2LIVE, and ImMesh🆕.