Structerf-SLAM: Neural implicit representation SLAM for structural environments

Haocheng Wang, Yanlong Cao, Xiaoyao Wei, Yejun Shou, Lingfeng Shen, Zhijie Xu, Kai Ren

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In recent years, research on simultaneous localization and mapping (SLAM) with neural implicit representations has shown promising results, owing to smooth mapping and low memory consumption that make it particularly suitable for structured environments with limited boundaries. However, no existing implicit SLAM method effectively exploits prior structural constraints to build accurate 3D maps. In this study, we propose Structerf-SLAM, an RGB-D dense tracking and mapping approach that combines visual odometry with neural implicit representation. Our scene representation consists of dual-layer feature grids and pre-trained decoders that decode the interpolated features into RGB and depth values. Moreover, structured planar constraints are integrated into both stages. In the tracking stage, exploiting three-dimensional plane features under the Manhattan-world assumption yields more stable and rapid data association, resolving tracking misalignment in textureless regions (e.g., floors and walls). In the mapping stage, enforcing planar consistency makes the depth predicted by the neural radiance field fit the underlying planes closely, producing smoother and more realistic map reconstruction. Experiments on synthetic and real-scene datasets demonstrate that Structerf-SLAM achieves competitive mapping and tracking quality.
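The abstract describes the scene representation only at a high level. As a minimal PyTorch sketch of the general idea (not the authors' implementation), a dual-layer (coarse/fine) feature grid can be trilinearly interpolated at sample points and the concatenated features decoded into color and density; the class and parameter names, grid resolutions, and decoder widths below are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualLayerGrid(nn.Module):
    """Illustrative sketch only -- not the paper's code. Resolutions,
    feature dimensions, and decoder shapes are assumptions."""

    def __init__(self, coarse_res=32, fine_res=128, feat_dim=8):
        super().__init__()
        # Two learnable 3D feature volumes, shape (1, C, D, H, W).
        self.coarse = nn.Parameter(
            0.01 * torch.randn(1, feat_dim, coarse_res, coarse_res, coarse_res))
        self.fine = nn.Parameter(
            0.01 * torch.randn(1, feat_dim, fine_res, fine_res, fine_res))
        # The paper uses pre-trained decoders; random init stands in here.
        self.color_decoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))
        self.density_decoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, pts):
        # pts: (N, 3) sample points, normalized to [-1, 1]^3.
        grid_pts = pts.view(1, -1, 1, 1, 3)
        feats = []
        for grid in (self.coarse, self.fine):
            # Trilinearly interpolate grid features at the sample points.
            f = F.grid_sample(grid, grid_pts, align_corners=True)  # (1, C, N, 1, 1)
            feats.append(f.view(f.shape[1], -1).t())               # (N, C)
        h = torch.cat(feats, dim=-1)                               # (N, 2C)
        rgb = torch.sigmoid(self.color_decoder(h))                 # (N, 3)
        sigma = F.relu(self.density_decoder(h))                    # (N, 1)
        # rgb and sigma would then be composited along rays to render
        # color and depth, as in standard volume rendering.
        return rgb, sigma
```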
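For the tracking-stage data association, one common way to use the Manhattan-world assumption is to match each detected plane normal against three dominant orthogonal scene directions. The sketch below illustrates that general technique; the function name, threshold, and interface are hypothetical, not taken from the paper.

```python
import numpy as np

def associate_planes_manhattan(normals, axes=np.eye(3), angle_thresh_deg=10.0):
    """Match detected plane normals to Manhattan axes (hypothetical sketch).

    normals: (N, 3) unit normals of planes detected in the current frame.
    axes: (3, 3) dominant orthogonal directions of the scene (rows).
    Returns, per plane, the index of the closest axis, or -1 if no axis
    lies within the angular threshold.
    """
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    labels = []
    for n in normals:
        sims = np.abs(axes @ n)  # |cos(angle)| to each dominant direction
        k = int(np.argmax(sims))
        labels.append(k if sims[k] >= cos_thresh else -1)
    return labels
```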
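Likewise, a planar consistency term for the mapping stage can be sketched as a plane-fitting regularizer: fit a plane to the back-projected points rendered inside a detected planar region and penalize the residuals. This is an assumption about the general form of such a constraint; the paper's exact loss may differ.

```python
import torch

def planar_consistency_loss(points):
    """Penalize deviation of rendered depth points from a best-fit plane.

    points: (N, 3) back-projected 3D points rendered inside one detected
    planar region. Hypothetical sketch of a plane-fitting regularizer.
    """
    centroid = points.mean(dim=0)
    centered = points - centroid
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centered point matrix.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    normal = vh[-1]
    # Signed point-to-plane distances; minimizing their mean square pulls
    # the rendered depth onto the plane.
    dist = centered @ normal
    return (dist ** 2).mean()
```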
Original language: English
Article number: 103893
Number of pages: 10
Journal: Computers and Graphics
Volume: 119
Early online date: 27 Feb 2024
DOIs
Publication status: Published - 1 Apr 2024
