Transformation & Translation Occupancy Grid Mapping: 2-Dimensional Deep Learning Refined SLAM

๐Ÿ“… 2025-04-28
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
To address the noise, structural distortion, and low-quality floorplans that odometry drift and pose-estimation errors introduce into occupancy grid maps (OGMs) built by 2D LiDAR SLAM in large, complex environments, this paper proposes a novel OGM paradigm that integrates geometric transformation modelling with translation-invariance constraints. It adapts robust pose-estimation principles from 3D SLAM to the 2D domain and introduces a deep reinforcement learning based synthetic data generation strategy that supports end-to-end GAN training for joint correction of SLAM errors. The method achieves real-time, high-accuracy mapping in real-world campus environments, markedly improving map clarity and geometric fidelity. Extensive evaluation across diverse large-scale, complex scenes demonstrates strong generalisability: the proposed approach consistently outperforms state-of-the-art 2D SLAM methods in map quality, localisation accuracy, and system robustness.

๐Ÿ“ Abstract
SLAM (Simultaneous Localisation and Mapping) is a crucial component for robotic systems, providing a map of an environment together with the robot's current location and previous trajectory. While 3D LiDAR SLAM has seen notable improvements in recent years, 2D SLAM lags behind. Gradual odometry drift and pose-estimation inaccuracies hinder modern 2D LiDAR-odometry algorithms in large, complex environments, and dynamic robotic motion coupled with inherently estimation-based SLAM processes introduces noise and errors that degrade map quality. Occupancy Grid Mapping (OGM) therefore often produces noisy, unclear results, because evidence-based mapping represents the map according to uncertain observations. This uncertainty-aware representation is what makes OGMs popular for exploration and navigation tasks, but it also limits their effectiveness for mapping-based tasks such as floor plan creation in complex scenes. To address this, we propose our novel Transformation and Translation Occupancy Grid Mapping (TT-OGM). We adapt accurate and robust pose-estimation techniques from 3D SLAM to 2D, and mitigate errors to improve map quality using Generative Adversarial Networks (GANs). We also introduce a novel data generation method based on deep reinforcement learning (DRL) to build datasets large enough to train a GAN for SLAM error correction. We demonstrate our SLAM running in real-time on data collected at Loughborough University, and show its generalisability on a collection of well-known, large-scale 2D occupancy maps of large, complex environments. Our novel approach enables the creation of high-quality OGMs in complex scenes, far surpassing current SLAM algorithms in terms of quality, accuracy and reliability.
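As background to the abstract's point that "evidence-based mapping represents the map according to uncertain observations", occupancy grids are conventionally maintained as per-cell log-odds fused from an inverse sensor model. The sketch below illustrates that standard update (not the paper's TT-OGM method); the sensor probabilities `p_hit` and `p_miss` are illustrative assumptions, not values from the paper:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(l_prev, observed_occupied, p_hit=0.7, p_miss=0.4):
    """Fuse one range observation into a cell's log-odds occupancy.

    With a uniform prior p0 = 0.5, logit(p0) = 0 and the update reduces
    to adding the inverse sensor model's log-odds term.
    """
    inv_sensor = logit(p_hit) if observed_occupied else logit(p_miss)
    return l_prev + inv_sensor

def probability(l):
    """Recover occupancy probability from log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# A cell hit three times and missed once ends up clearly "occupied",
# but its probability never saturates at 1 -- this residual uncertainty
# is the grey, noisy appearance of evidence-based maps.
l = 0.0
for hit in (True, True, False, True):
    l = update_cell(l, hit)
print(round(probability(l), 3))  # prints 0.894
```

Because each cell only accumulates noisy evidence, pose errors smear obstacles across neighbouring cells; this is the map-quality limitation the paper targets with its GAN-based correction.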
Problem

Research questions and friction points this paper is trying to address.

Improving 2D SLAM accuracy in large complex environments
Reducing noise and errors in Occupancy Grid Mapping
Enhancing map quality for complex scene tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapt 3D SLAM pose estimation to 2D
Use GANs for SLAM error correction
Generate datasets via deep reinforcement learning
Leon Davies
Department of Computer Science, Loughborough University, Epinal Way, Loughborough, LE11 3TU, Leicestershire, United Kingdom
Baihua Li
Department of Computer Science, Loughborough University, Epinal Way, Loughborough, LE11 3TU, Leicestershire, United Kingdom
Mohamad Saada
Department of Computer Science, Loughborough University, Epinal Way, Loughborough, LE11 3TU, Leicestershire, United Kingdom
Simon Solvsten
European Center for Risk & Resilience Studies, University of Southern Denmark, Degnevej 14, 6705 Esbjerg, Denmark
Qinggang Meng
Department of Computer Science, Loughborough University, UK
robotics, developmental robotics, multi-UAV/UGV cooperation, computer vision, pattern recognition