**Title:** Multi-sensor robotic navigation using teacher-student framework **Author:** Amir Mahdi Amani **Institution:** Humanoid Robots Lab, University of Bonn, Germany
Multi-Sensor Robotic Navigation Using Distillation Policy
AMANI, AMIR MAHDI
2023/2024
Abstract
Traditional robotic navigation systems often rely on a single sensory input such as LiDAR, which has limitations including mechanical constraints, poor object recognition, and reduced effectiveness under adverse weather conditions. To address these shortcomings, this thesis introduces a novel approach that ensures continuous navigation by leveraging vision data when LiDAR is insufficient. We employ a teacher-student framework involving two teacher models, each trained on LiDAR data using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and designed to exhibit a different navigation behavior, to navigate while avoiding obstacles. A single student model learns to replicate the diverse behaviors of both teachers through supervised learning, using LiDAR or vision data collected from the robots. This results in a generalized navigation model applicable to both the Pioneer 3dx and the TurtleBot3 Waffle, eliminating the need for separate models for each platform. The student model effectively emulates the varied navigation strategies of the teacher models, allowing the robotic system to switch seamlessly to vision-based navigation when LiDAR limitations impede performance. This approach enhances flexibility and robustness across diverse operational settings, paving the way for more reliable and versatile robotic navigation in real-world applications, even when the LiDAR sensor modality is compromised.
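As a rough illustration of the distillation step the abstract describes, the sketch below shows how a single student policy could be trained by supervised regression onto actions produced by the LiDAR-trained TD3 teachers. All names, network sizes, and the MSE imitation loss are assumptions made for illustration; the thesis's actual architectures, encoders, and data pipeline may differ.

```python
import torch
import torch.nn as nn

# Hypothetical student policy: maps a flattened observation (a LiDAR scan
# or a vision feature vector) to a bounded velocity command.
class StudentPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim), nn.Tanh(),  # e.g. linear and angular velocity in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def distill_step(student, optimizer, batch):
    """One supervised distillation update.

    `batch` is assumed to hold observations (LiDAR or vision features) and the
    actions the corresponding TD3 teacher produced in those states.
    """
    obs, teacher_action = batch              # shapes: (B, obs_dim), (B, act_dim)
    pred_action = student(obs)
    loss = nn.functional.mse_loss(pred_action, teacher_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with dummy data standing in for logged teacher rollouts.
student = StudentPolicy(obs_dim=360)          # e.g. one range reading per LiDAR beam
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
dummy_batch = (torch.randn(32, 360), torch.rand(32, 2) * 2 - 1)
print(distill_step(student, optimizer, dummy_batch))
```

In practice the student would likely route raw LiDAR scans and camera images through modality-specific encoders before the shared policy head; a plain MLP is used here only to keep the sketch self-contained.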
File | Size | Format
---|---|---
Amani_Amir Mahdi.pdf (restricted access) | 1.22 MB | Adobe PDF
https://hdl.handle.net/20.500.12608/74949