Place recognition plays a crucial role in navigational assistance and remains a challenging problem in assistive technology, since it is prone to erroneous localization caused by various changes between database and query images. Targeting a wearable assistive device for visually impaired people, we propose an open-source place recognition algorithm, OpenMPR, which exploits multimodal data to address these challenges. Unlike conventional place recognition, OpenMPR not only leverages multiple effective descriptors, but also assigns different weights to those descriptors during image matching. It further incorporates GNSS data and applies cone-based sequence searching for robust place recognition. Experiments show that the proposed algorithm solves the place recognition problem in real-world scenarios and surpasses state-of-the-art algorithms in terms of assistive navigation performance. On the real-world test dataset, the online OpenMPR achieves 88.7% precision at 100% recall without illumination changes, and 57.8% precision at 99.3% recall with illumination changes. OpenMPR is available at https://github.com/chengricky/OpenMultiPR.
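The weighted multi-descriptor matching mentioned above can be sketched as follows. This is a minimal illustration, not the OpenMPR implementation: the descriptor names (`cnn`, `gist`), dimensions, and weight values are all assumptions chosen for the example; the actual descriptors and learned weights are described in the paper and repository.

```python
import numpy as np

def fused_distance(query_descs, db_descs, weights):
    """Weighted fusion of per-descriptor L2 distances (illustrative sketch;
    the descriptors and weights used by OpenMPR itself differ)."""
    total = 0.0
    for name, w in weights.items():
        # normalize each modality so distances are comparable before weighting
        q = query_descs[name] / np.linalg.norm(query_descs[name])
        d = db_descs[name] / np.linalg.norm(db_descs[name])
        total += w * np.linalg.norm(q - d)
    return total

# hypothetical example with two descriptor modalities
rng = np.random.default_rng(0)
query = {"cnn": rng.standard_normal(128), "gist": rng.standard_normal(64)}
database = [
    {"cnn": rng.standard_normal(128), "gist": rng.standard_normal(64)}
    for _ in range(5)
]
weights = {"cnn": 0.7, "gist": 0.3}  # weight values are assumptions

# the best-matching database image minimizes the fused distance
best = min(range(len(database)),
           key=lambda i: fused_distance(query, database[i], weights))
```

In the full system, this per-image score would feed into the cone-based sequence search, which matches short sequences of images (constrained by GNSS) rather than single frames.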