
VISION-BASED PROXIMITY NAVIGATION BETWEEN SATELLITES USING CONVOLUTIONAL NEURAL NETWORKS

FAVARETTO, FABIO
2022/2023

Abstract

In recent years, interest in proximity operations between satellites has grown, driven by advancements in space technology, the increasing number of satellites in orbit, and the need to mitigate the risks posed by space debris. In addition, the application of Artificial Intelligence (AI) in the space domain has recently gained prominence. Among the available Machine Learning (ML) techniques, Convolutional Neural Networks (CNNs) are a deep learning method well suited to Guidance, Navigation and Control (GNC) systems for close-proximity operations. This work proposes a relative navigation pipeline based on computer vision algorithms that computes the measurement vector used by a subsequent Extended Kalman Filter (EKF) to estimate the relative motion between a chaser satellite hosting a stereo camera and an uncooperative target satellite. The pipeline combines the state-of-the-art CNN You Only Look Once version 7 (YOLOv7) with the Oriented FAST and Rotated BRIEF (ORB) feature detector. YOLOv7 outperforms other CNNs in terms of speed and accuracy and is used for both object detection and segmentation tasks. The network is crucial for narrowing the search region for relevant target features, reducing the pipeline's computing time for real-time implementation. The first key contribution of this work is the training of YOLOv7-tiny both with publicly available datasets (pretraining with SPEED and/or COCO) and with two datasets, one for object detection and one for segmentation, created in the laboratory using a representative facility and data augmentation methods. The relative navigation pipeline embedding the trained networks was then tested in a representative laboratory environment using a two-unit CubeSat target mock-up and a free-floating chaser mock-up hosting the ZED 2i stereo camera and a mini-PC powered by an NVIDIA Jetson Xavier board.
The computing time and performance of each navigation pipeline step were evaluated for different combinations of stereo camera frame rate (FPS), image resolution, lighting conditions, computing unit (CPU or GPU), and relative motion between the two satellite mock-ups.
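To illustrate the filter's role in the pipeline described above (a minimal sketch, not code from the thesis): each vision-pipeline cycle yields a 3-D relative-position measurement from stereo triangulation, which a Kalman-type update folds into the state estimate. Assuming, for illustration, a six-element relative position/velocity state and a linear position-only measurement model, the update step could look like:

```python
# Illustrative sketch (assumed state and noise values, not from the thesis):
# the measurement update an EKF-style navigation filter performs once the
# vision pipeline delivers a 3-D relative-position measurement.
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update; returns posterior (x, P)."""
    y = z - H @ x                    # innovation (measurement residual)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_post = x + K @ y
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post

# State: relative position and velocity [px, py, pz, vx, vy, vz] (assumed).
x = np.zeros(6)
P = np.eye(6)                                  # prior covariance (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # stereo measures position only
R = 0.01 * np.eye(3)                           # measurement noise (assumed)
z = np.array([1.0, 0.5, 2.0])                  # position from stereo triangulation

x, P = kf_update(x, P, z, H, R)
```

In the full EKF the measurement model is generally nonlinear and `H` is its Jacobian; the sketch keeps it linear only to show where the vision-derived measurement vector enters the filter.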
Keywords: vision-based; convolutional neural networks; satellite; relative navigation
File: Favaretto_Fabio.pdf (Adobe PDF, 30.47 MB), under embargo until 06/03/2026
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/43385