Abstract
In dynamic indoor environments, a Visual Simultaneous Localization and Mapping (vSLAM) system must account for moving objects, because they can destabilize the system's visual odometry and degrade its position estimation accuracy. vSLAM uses feature points extracted from a sequence of images as its only input to perform localization while simultaneously building a map of the environment. This paper proposes a vSLAM system based on ORB-SLAM3 and YOLOR. In combination with an object detection model (YOLOX) applied to the extracted feature points, the proposed system achieves 2-4% higher accuracy than VPS-SLAM and DS-SLAM. Static feature points such as signs and benches were used to compute the camera position, while dynamic moving objects were eliminated in the tracking thread. A custom dataset was used to validate and evaluate the proposed method: it includes indoor and outdoor RGB-D images of train stations containing dynamic objects and dense crowds, along with ground truth data, sequence data, video recordings of the train stations, and X, Y, Z data. The results show that ORB-SLAM3 with YOLOR for object detection achieves 89.54% accuracy in dynamic indoor environments, compared with previous systems such as VPS-SLAM.
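The core filtering step described in the abstract — discarding feature points that fall on detected dynamic objects so that only static points feed the pose estimation — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name `filter_dynamic_keypoints` and the toy keypoints and bounding boxes are assumptions, and a real pipeline would take ORB keypoints and detector output (e.g. from YOLOR) instead.

```python
def filter_dynamic_keypoints(keypoints, dynamic_boxes):
    """Keep only keypoints that lie outside every dynamic-object box.

    keypoints:     list of (x, y) pixel coordinates (e.g. ORB keypoints)
    dynamic_boxes: list of (x1, y1, x2, y2) boxes from an object detector,
                   restricted to dynamic classes such as 'person'
    """
    static_points = []
    for (x, y) in keypoints:
        inside_any_box = any(
            x1 <= x <= x2 and y1 <= y <= y2
            for (x1, y1, x2, y2) in dynamic_boxes
        )
        if not inside_any_box:
            static_points.append((x, y))
    return static_points


# Toy example: three keypoints, one hypothetical 'person' detection box.
keypoints = [(10, 10), (50, 50), (90, 20)]
dynamic_boxes = [(40, 40, 60, 60)]
print(filter_dynamic_keypoints(keypoints, dynamic_boxes))  # -> [(10, 10), (90, 20)]
```

In the proposed system this filtering happens in the tracking thread, so that camera pose is estimated only from static structure such as signs and benches.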
| Original language | English |
|---|---|
| Article number | 7553 |
| Journal | Sensors |
| Volume | 22 |
| Issue number | 19 |
| DOIs | |
| Publication status | Published - 5 Oct 2022 |
Keywords
- object detection
- SLAM
- visual SLAM
- simultaneous localization and mapping (SLAM)
- Algorithms
- Optic Flow
- Humans
- Video Recording
ASJC Scopus subject areas
- Analytical Chemistry
- Information Systems
- Atomic and Molecular Physics, and Optics
- Biochemistry
- Instrumentation
- Electrical and Electronic Engineering
Fingerprint
Dive into the research topics of 'Visual SLAM for dynamic environments based on object detection and optical flow for dynamic object removal'.