GNSS localization by machine learning techniques

NIKBAKHSH JORSHARI, FARZAM
2023/2024

Abstract

This thesis addresses the problem of accurately determining the location of a receiver with a machine-learning classifier that exploits ranging signals from a Global Navigation Satellite System (GNSS), such as GPS or Galileo. Traditional GNSS methods, such as Least Squares Positioning (LSP), estimate a location by minimizing the difference between observed and predicted satellite ranges. While these methods work well under ideal conditions, their performance degrades in environments with significant signal interference, obstructions, or multipath effects. The proposed solution is a neural network that takes satellite positions and ranging measurements (pseudoranges) as input and predicts the region, or tile, in which the receiver is located. ‘Tiles’ are obtained by dividing the geographical area of interest into smaller, distinct regions, which turns localization into a classification task over a set of predefined zones. Key contributions of this work include a multi-input neural network architecture, tailored preprocessing strategies that ensure data consistency, and integration techniques that handle the variability of GNSS data. The performance analysis evaluates the accuracy, robustness, and computational efficiency of the model across various scenarios. The proposed approach shows promise as a scalable alternative to conventional GNSS localization methods; future work aims to refine the model architecture and expand the dataset to improve real-world applicability. This thesis is a step toward a more adaptable and precise GNSS localization framework, bridging the gap between traditional techniques and modern machine learning.
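The tile-based formulation described above can be made concrete with a small sketch. The thesis itself does not publish its gridding code, so the function below is a hypothetical illustration of the idea: a bounding box is split into an n_rows × n_cols grid, and each receiver fix (latitude, longitude) is mapped to the integer index of the tile that contains it, which then serves as the class label for the neural network. All names, the example bounding box, and the grid size are illustrative assumptions, not values from the thesis.

```python
def tile_index(lat, lon, lat0, lon0, lat1, lon1, n_rows, n_cols):
    """Map a (lat, lon) fix to a tile index on an n_rows x n_cols grid
    covering the bounding box [lat0, lat1] x [lon0, lon1].

    Tiles are numbered row-major from the south-west corner, so the
    classifier's output layer would have n_rows * n_cols classes.
    Illustrative only: assumes the fix lies inside the bounding box.
    """
    # Fractional position inside the box, truncated to a grid cell;
    # min() clamps points that fall exactly on the north/east edge.
    row = min(int((lat - lat0) / (lat1 - lat0) * n_rows), n_rows - 1)
    col = min(int((lon - lon0) / (lon1 - lon0) * n_cols), n_cols - 1)
    return row * n_cols + col


# Example: a 10 x 10 grid (100 classes) over a hypothetical area near
# latitude 45.0-45.1 N, longitude 11.8-11.9 E.
sw_corner = tile_index(45.0, 11.8, 45.0, 11.8, 45.1, 11.9, 10, 10)   # tile 0
ne_corner = tile_index(45.1, 11.9, 45.0, 11.8, 45.1, 11.9, 10, 10)   # tile 99
```

A finer grid gives better localization resolution but more classes, hence fewer training samples per class; that trade-off is exactly what makes the tile size a design parameter of this kind of approach.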
Keywords: Machine Learning, Neural Networks, Satellite Navigation, Positioning, Accuracy
Files in this record:

File: NikbakhshJorshari_Farzam.pdf (open access)
Size: 6.05 MB
Format: Adobe PDF

The text of this website © Università degli Studi di Padova. Full texts are published under a non-exclusive license; metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/76849