OpenMPR: Recognize places using multimodal data for people with visual impairments

Ruiqi Cheng, Kaiwei Wang, Jian Bai, Zhijie Xu

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Place recognition plays a crucial role in navigational assistance, and remains a challenging issue in assistive technology: it is prone to erroneous localization owing to appearance changes between database and query images. Aiming at a wearable assistive device for visually impaired people, we propose an open-source place recognition algorithm, OpenMPR, which utilizes multimodal data to address these challenges. Compared with conventional place recognition, OpenMPR not only leverages multiple effective descriptors, but also assigns different weights to those descriptors during image matching. Incorporating GNSS data into the algorithm, cone-based sequence searching is used for robust place recognition. The experiments illustrate that the proposed algorithm solves the place recognition problem in real-world scenarios and surpasses state-of-the-art algorithms in terms of assistive navigation performance. On the real-world testing dataset, the online OpenMPR achieves 88.7% precision at 100% recall without illumination changes, and 57.8% precision at 99.3% recall with illumination changes. OpenMPR is available at https://github.com/chengricky/OpenMultiPR.
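The two core ideas in the abstract, fusing several descriptor modalities with per-modality weights and then matching over short image sequences rather than single frames, can be illustrated with a minimal sketch. This is not the authors' implementation (see the GitHub repository for that); all function names, the cosine-distance fusion, and the fixed-offset sequence scoring (a simplified stand-in for the cone-based search) are illustrative assumptions.

```python
import numpy as np

def fused_distance(query_desc, db_desc, weights):
    """Combine per-modality descriptor distances with scalar weights.

    query_desc / db_desc: dict mapping modality name -> (n_images, dim) array.
    weights: dict mapping modality name -> float weight.
    Returns an (n_query, n_db) fused distance matrix.
    """
    n_q = next(iter(query_desc.values())).shape[0]
    n_db = next(iter(db_desc.values())).shape[0]
    fused = np.zeros((n_q, n_db))
    for modality, w in weights.items():
        # L2-normalize so the dot product is cosine similarity.
        q = query_desc[modality] / np.linalg.norm(
            query_desc[modality], axis=1, keepdims=True)
        d = db_desc[modality] / np.linalg.norm(
            db_desc[modality], axis=1, keepdims=True)
        fused += w * (1.0 - q @ d.T)  # weighted cosine distance
    return fused

def sequence_match(fused, seq_len=5):
    """Pick the best database index for each query by summing distances
    along an aligned trailing sequence of seq_len frames (a simplified
    stand-in for cone-based sequence searching)."""
    n_q, n_db = fused.shape
    best = np.full(n_q, -1)
    for i in range(n_q):
        scores = np.full(n_db, np.inf)
        for j in range(n_db):
            total, count = 0.0, 0
            for k in range(seq_len):
                qi, dj = i - k, j - k
                if qi < 0 or dj < 0:
                    break  # sequence runs off the start of either set
                total += fused[qi, dj]
                count += 1
            if count == seq_len:
                scores[j] = total
        if np.isfinite(scores).any():
            best[i] = int(np.argmin(scores))
    return best
```

In this sketch, GNSS data would enter by restricting the inner loop over `j` to database frames whose recorded positions lie near the query's GNSS fix, shrinking the search space before sequence scoring.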

Original language: English
Article number: 124004
Number of pages: 11
Journal: Measurement Science and Technology
Volume: 30
Issue number: 12
Early online date: 10 May 2019
DOIs
Publication status: Published - 30 Sep 2019
