Deep learning-driven vision-based relative pose estimation for satellite proximity operations

GRIGOLON, SIMONE
2024/2025

Abstract

Accurate and real-time satellite pose estimation is paramount for enabling complex proximity operations in space. This thesis presents the development and validation of a vision-based pipeline to retrieve the 6-degrees-of-freedom pose of a target satellite using monocular images from the SPEED dataset. The core of the proposed approach leverages YOLOv11n-pose, a deep learning model specifically trained to detect 11 predefined keypoints on the target satellite structure within the input images. Subsequently, the extracted 2D keypoint coordinates, combined with their known 3D locations in the satellite's body frame, are fed into a Perspective-n-Point (PnP) algorithm. Robustness against potential keypoint detection outliers is achieved by integrating the PnP solver with a Random Sample Consensus (RANSAC) scheme. The final stage of this work involves laboratory validation, where the entire pipeline is deployed and executed on a ZED Box platform to simulate computational constraints and evaluate the performance of the proposed system in an environment representative of an onboard flight computer.
Deep learning
Pose estimation
Computer vision
Satellite navigation
Files in this item:
Grigolon_Simone.pdf (11.01 MB, Adobe PDF) — under embargo until 07/07/2028

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/86990