
MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework

Xiangcheng Hu1 · Jin Wu1 · Mingkai Jia1· Hongyu Yan1· Yi Jiang2· Binqian Jiang1
Wei Zhang1 · Wei He3 · Ping Tan1*†

1HKUST   2CityU   3USTB

†project lead *corresponding author

[Paper PDF] [PRs Welcome] [GitHub Stars] [Fork] [GitHub Issues] [License]

image-20241127080805642

MapEval is a comprehensive framework for evaluating point cloud maps in SLAM systems, addressing two fundamentally distinct aspects of map quality assessment:

  1. Global Geometric Accuracy: Measures the absolute geometric fidelity of the reconstructed map compared to ground truth. This aspect is crucial as SLAM systems often accumulate drift over long trajectories, leading to global deformation.
  2. Local Structural Consistency: Evaluates the preservation of local geometric features and structural relationships, which is essential for tasks like obstacle avoidance and local planning, even when global accuracy may be compromised.

These complementary aspects require different evaluation approaches, as global drift may exist despite excellent local reconstruction, or conversely, good global alignment might mask local inconsistencies. Our framework provides a unified solution through both traditional metrics and novel evaluation methods based on optimal transport theory.

News

  • 2025/02/12: Code released!
  • 2025/02/05: Resubmitted.
  • 2024/12/19: Submitted to IEEE RA-L. Once the paper is accepted, a new version of the code will be released!

Key Features

Traditional Metrics Implementation:

  • Accuracy (AC): Point-level geometric error assessment
  • Completeness (COM): Map coverage evaluation
  • Chamfer Distance (CD): Bidirectional point cloud difference
  • Mean Map Entropy (MME): Information-theoretic local consistency metric
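
As a concrete illustration of the Accuracy and Chamfer Distance definitions above, here is a minimal Python sketch based on nearest-neighbor search (the function name `chamfer_distance` is ours for illustration; the repository's actual implementation is C++):

```python
# Illustrative sketch of the bidirectional Chamfer Distance (CD) between an
# estimated map and a ground-truth map, using a KD-tree for nearest neighbors.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(est, gt):
    """Mean nearest-neighbor distance in both directions between (N, 3) arrays."""
    d_est_to_gt, _ = cKDTree(gt).query(est)   # each estimated point -> nearest GT point
    d_gt_to_est, _ = cKDTree(est).query(gt)   # each GT point -> nearest estimated point
    return d_est_to_gt.mean() + d_gt_to_est.mean()

est = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt  = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.1]])
print(round(chamfer_distance(est, gt), 6))  # → 0.2 (0.1 m in each direction)
```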

Novel Proposed Metrics:

  • Average Wasserstein Distance (AWD): Robust global geometric accuracy assessment
  • Spatial Consistency Score (SCS): Enhanced local consistency evaluation
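
AWD compares voxel-level Gaussian distributions under the Wasserstein metric. As a simplified illustration, the closed-form 2-Wasserstein distance between two Gaussians N(m1, S1) and N(m2, S2) can be sketched in Python (`gaussian_w2` is a hypothetical helper, not the repository's API; AWD averages such distances over corresponding voxels):

```python
# Closed-form 2-Wasserstein distance between two Gaussian distributions:
# W2^2 = |m1 - m2|^2 + Tr(S1 + S2 - 2 * (S2^{1/2} S1 S2^{1/2})^{1/2})
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    s2_half = sqrtm(S2)
    cross = sqrtm(s2_half @ S1 @ s2_half)
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(np.real(w2_sq), 0.0)))  # clamp tiny numerical residue

m1, m2 = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])
S = 0.01 * np.eye(3)
print(round(gaussian_w2(m1, S, m2, S), 6))  # → 0.3 (equal covariances: W2 = mean offset)
```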

image-20241129091604653

Results

Simulated experiments

Noise Sensitivity Outlier Robustness
image-20241129123446075 image-20241129091845196

image-20241127083707943

Real-world experiments

Map Evaluation via Localization Accuracy Map Evaluation in Diverse Environments
image-20241127083813797 image-20241127083801691
image-20241129092052634

Computational Efficiency

image-20241129091927786

Parameter Sensitivity Analysis

image-20241127084154114

Datasets

  • MS-dataset
  • FusionPortable (FP) and FusionPortableV2 datasets
  • Newer College (NC) dataset
  • GEODE dataset (GE)

image-20241129091746334

Quick Start

Dependencies

Test Data (password: 1)

| Sequence | Test PCD | GT PCD |
| -------- | -------- | ------ |
| MCR_slow | map.pcd  | map_gt.pcd |

Usage

  1. Install Open3D (a higher version of CMake may be needed):
git clone https://github.com/isl-org/Open3D.git
cd Open3D && mkdir build && cd build
cmake ..
make install
  2. Install cloud_map_eval:
git clone https://github.com/JokerJohn/Cloud_Map_Evaluation.git
cd Cloud_Map_Evaluation/map_eval && mkdir build && cd build
cmake ..
make
./map_eval
  3. Set the parameters in config.yaml and read the inline instructions for each of them.
# accuracy_level, vector5d; we mainly use the result of the first element
# if the inlier ratio is very small, try larger values, e.g. for outdoors: [0.5, 0.3, 0.2, 0.1, 0.05]
accuracy_level: [0.2, 0.1, 0.08, 0.05, 0.01]

# initial_matrix, vector16d, the initial matrix of the registration
# make sure the format is correct, or you will get the error: YAML::BadSubscript' what(): operator[] call on a scalar
initial_matrix:
  - [1.0, 0.0, 0.0, 0.0]
  - [0.0, 1.0, 0.0, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  - [0.0, 0.0, 0.0, 1.0]
  
# vmd voxel size, outdoor: 2.0-4.0; indoor: 2.0-3.0
vmd_voxel_size: 3.0
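
Before running, it may help to sanity-check that `initial_matrix` parses as a proper 4x4 homogeneous transform, since a malformed entry is what triggers the `YAML::BadSubscript` error mentioned in the comments. A minimal Python sketch (`check_initial_matrix` is a hypothetical helper, not part of the toolbox):

```python
# Validate that an initial_matrix entry forms a 4x4 homogeneous transform.
import numpy as np

def check_initial_matrix(rows):
    T = np.asarray(rows, dtype=float)
    assert T.shape == (4, 4), "initial_matrix must be 4 rows of 4 numbers"
    assert np.allclose(T[3], [0.0, 0.0, 0.0, 1.0]), "last row must be [0, 0, 0, 1]"
    R = T[:3, :3]
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-6), "rotation block must be orthonormal"
    return T

T = check_initial_matrix([[1.0, 0.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0, 0.0],
                          [0.0, 0.0, 0.0, 1.0]])
```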
  4. Get the final results.

Given a point cloud map generated by a pose-SLAM system and a ground-truth point cloud map, we calculate the related metrics.

image-20250214100110872

In this process we also obtain a rendered raw distance-error map (10 cm) and an inlier distance-error map (2 cm); the R->G->B color gradient represents distance errors in the 0-10 cm range.

image (4)

If no ground-truth map is available, we can still evaluate the Mean Map Entropy (MME); a smaller value means better local consistency.

image (5)
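
A common definition of MME is the average differential entropy of a Gaussian fitted to each point's local neighborhood; lower entropy means crisper, more consistent local surfaces. A simplified sketch assuming this definition (not the repository's C++ implementation):

```python
# Minimal sketch of Mean Map Entropy (MME): the differential entropy of a
# Gaussian fitted within a radius around each point, averaged over the map.
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points, radius=0.3):
    tree = cKDTree(points)
    entropies = []
    for p in points:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:  # need enough neighbors for a stable 3D covariance
            continue
        cov = np.cov(points[idx].T) + 1e-12 * np.eye(3)  # regularize
        # differential entropy of N(mu, cov): 0.5 * ln((2*pi*e)^3 * det(cov))
        entropies.append(0.5 * np.log((2 * np.pi * np.e) ** 3 * np.linalg.det(cov)))
    return float(np.mean(entropies))

# A noisier map yields a higher (worse) MME: two synthetic planes, one with
# 5 mm and one with 5 cm of vertical noise.
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
xy = np.column_stack([xs.ravel(), ys.ravel()])
tight = np.column_stack([xy, rng.normal(0.0, 0.005, len(xy))])
loose = np.column_stack([xy, rng.normal(0.0, 0.05, len(xy))])
print(mean_map_entropy(tight) < mean_map_entropy(loose))  # → True
```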

We can also obtain a simple mesh reconstructed from the point cloud map.

image-20230101200651976

  5. Collect the result files.

image-20250212202446474

  6. To visualize the voxel errors, use error-visualization.py:

    pip install numpy matplotlib scipy
    
    python3 error-visualization.py
    image-20250212202920950 image-20250212202933255 image-20250212203009074 image-20250212203025149

Issues

How do you get your initial pose?

We can use CloudCompare to align the LIO map to the GT map:

  • Roughly translate and rotate the LIO point cloud map to the GT map.

  • Manually register the moved LIO map (aligned) to the GT map (reference); the transform printed in the terminal is the initial pose matrix T.

image-20230106135937336

image-20230106140017020
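
The initial pose obtained this way is simply a 4x4 homogeneous transform applied to the LIO map before fine registration. In numpy terms (illustrative only; `apply_transform` is not part of the toolbox):

```python
# Apply a 4x4 homogeneous transform T to an (N, 3) point array: append a
# homogeneous 1 to each point, left-multiply by T, and drop the last column.
import numpy as np

def apply_transform(points, T):
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Example: 90-degree yaw plus a 1 m translation along x.
T = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.0, 0.0]])
print(apply_transform(pts, T))  # → [[1. 1. 0.]]
```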

What's the difference between raw rendered map and inlier rendered map?

The raw rendered map (left) color-codes the error of every point in the map estimated by the algorithm. Each estimated point that finds no corresponding point in the ground truth (GT) map defaults to the maximum error (20 cm) and is rendered red. The inlier rendered map (right), by contrast, excludes the non-overlapping regions of the point cloud and colors only the error of the inlier points after point cloud matching, so it contains only a subset of the points in the original estimated map.

image (6)

Publications

We kindly recommend citing our papers if you find this library useful:

@misc{hu2024mapeval,
      title={MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework}, 
      author={Xiangcheng Hu and Jin Wu and Mingkai Jia and Hongyu Yan and Yi Jiang and Binqian Jiang and Wei Zhang and Wei He and Ping Tan},
      year={2024},
      eprint={2411.17928},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2411.17928}, 
}

@ARTICLE{hu2024paloc,
  author={Hu, Xiangcheng and Zheng, Linwei and Wu, Jin and Geng, Ruoyu and Yu, Yang and Wei, Hexiang and Tang, Xiaoyu and Wang, Lujia and Jiao, Jianhao and Liu, Ming},
  journal={IEEE/ASME Transactions on Mechatronics}, 
  title={PALoc: Advancing SLAM Benchmarking With Prior-Assisted 6-DoF Trajectory Generation and Uncertainty Estimation}, 
  year={2024},
  volume={29},
  number={6},
  pages={4297-4308},
  doi={10.1109/TMECH.2024.3362902}}

Contributors