Transfer Learning for UWB Error Correction and (N)LOS Classification in Multiple Environments

Jaron Fontaine, Fuhu Che, Adnan Shahid, Ben Van Herbruggen, Qasim Ahmed, Waqas Bin Abbas, Eli De Poorter

Research output: Contribution to journal › Article › peer-review


Ultra Wideband (UWB) is a popular technology to address the need for high precision indoor positioning systems in challenging Industry 4.0 use cases. In line-of-sight (LOS) environments, UWB positioning errors in the order of 1-10 cm can be achieved. However, in non-line-of-sight (NLOS) conditions, this precision drops significantly, with errors typically >30 cm. Machine learning has been proposed to improve the precision in such NLOS conditions, but the resulting models are typically environment-specific and lack generalization to new environments and UWB configurations. As such, it is necessary to collect large datasets to train a neural network for each new environment or UWB configuration. To remedy this, this paper proposes automatic optimizations for transfer learning (TL) of deep neural networks towards new environments and UWB configurations. We analyze error correction and (non-)line-of-sight ((N)LOS) classification models, using either feature- or channel impulse response (CIR)-based input data. Our TL solutions show a 50% error improvement and a 15% (N)LOS classification accuracy improvement (for both feature- and CIR-based approaches) compared to a model trained in a different environment. We also analyze the impact of TL using a limited number of samples (25 to 400 samples). The highest accuracy is typically achieved by the CIR-based approach: with only 50 samples from the new mixed (N)LOS environment, we show ±10 cm precision after error correction with 93% (N)LOS detection. The presented results demonstrate high precision UWB localization (error reduced from 643 mm to 245 mm) through ML with minimal data collection effort in challenging NLOS environments.
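The core idea the abstract describes, pretraining on a data-rich source environment and fine-tuning with only a handful of labelled samples from a new environment, can be sketched with a toy warm-start classifier. This is not the paper's model: the Gaussian stand-ins for CIR-derived features, the logistic-regression classifier, and the environment "shift" are all illustrative assumptions, chosen only to show why warm-starting from source weights helps when the target dataset is tiny.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_env(n_per_class, shift, d=8):
    # Synthetic stand-in for CIR-derived features: LOS (label 0) and
    # NLOS (label 1) samples; `shift` crudely models a new environment.
    los = rng.normal(0.0 + shift, 1.0, size=(n_per_class, d))
    nlos = rng.normal(3.0 + shift, 1.0, size=(n_per_class, d))
    X = np.vstack([los, nlos])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, b=0.0, lr=0.1, epochs=200):
    # Full-batch logistic regression; passing pretrained `w`, `b`
    # warm-starts the optimisation -- the transfer-learning step here.
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return float(np.mean((sigmoid(X @ w + b) > 0.5) == y))

# 1) Pretrain on a large source-environment dataset.
Xs, ys = make_env(500, shift=0.0)
w_src, b_src = train(Xs, ys)

# 2) Only 50 labelled samples (25 per class) exist in the new environment.
Xt, yt = make_env(25, shift=2.0)
Xt_test, yt_test = make_env(500, shift=2.0)

# 3) Fine-tune the pretrained weights on the small target set
#    (smaller learning rate, more epochs, starting from w_src).
w_tl, b_tl = train(Xt, yt, w=w_src.copy(), b=b_src, lr=0.02, epochs=1000)

acc_src_only = accuracy(Xt_test, yt_test, w_src, b_src)
acc_tl = accuracy(Xt_test, yt_test, w_tl, b_tl)
print(f"source-only: {acc_src_only:.2f}, after TL: {acc_tl:.2f}")
```

The source-only model misjudges the shifted environment, while fine-tuning on just 50 target samples recovers most of the lost accuracy, mirroring the paper's observation that a small target dataset suffices once a pretrained model is available.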

Original language: English
Article number: 10195942
Number of pages: 16
Journal: IEEE Internet of Things Journal
Early online date: 27 Jul 2023
Publication status: E-pub ahead of print - 27 Jul 2023
