Active perception based on 2D LiDAR for social navigation in crowded environments

GUGLIELMIN, NICCOLÒ
2022/2023

Abstract

In the domain of social robotics, where robots dynamically interact with humans, fusing laser data with vision systems and Machine Learning (ML) techniques is crucial for navigating toward target people. Active perception not only reduces uncertainty but also enhances the sociability of robots by granting them a deeper understanding of the social context in which they operate. This thesis proposes new active perception methods for social navigation in crowded environments based on 2D Light Detection and Ranging (LiDAR), so that they can be used in public spaces while respecting privacy and without requiring additional sensors. The first method uses a policy that measures and exploits the level of uncertainty in people detection to choose the poses from which the robot can best confirm a detection or reject a previous false one. The other two methods learn this policy with a Deep Neural Network (DNN) and learn, via Reinforcement Learning (RL), the robot motions most appropriate for increasing certainty about the people in the environment. Results on the policy's behavior show that this approach is more readily deployable on a real robot than the other two, which were tested only in simulation. Furthermore, experiments at the IAS-Lab on the TIAGo++ (Take It And Go) robot revealed that the robot can adapt its behavior to the working scenario and accomplish its tasks successfully in challenging situations, which is the goal of active perception.
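The core idea of the first method — scoring candidate robot poses by the uncertainty of the person detection they are expected to produce — can be sketched as follows. This is a minimal illustration, not the thesis implementation: the binary-entropy uncertainty measure, the `predict_confidence` callback, and the pose names are all assumptions introduced here for the example.

```python
import math

def detection_entropy(p):
    """Binary entropy of a person-detection confidence p in [0, 1].

    High entropy (near p = 0.5) means the detector is uncertain whether
    a person is present; low entropy means the detection is nearly settled.
    """
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def choose_next_pose(candidate_poses, predict_confidence):
    """Pick the candidate pose expected to yield the most certain detection.

    `predict_confidence(pose)` is a user-supplied (here: hypothetical)
    estimate of the detection confidence the robot would obtain from that
    pose; the pose minimizing the resulting entropy is selected, so the
    robot moves where it can best confirm or reject the detection.
    """
    return min(candidate_poses,
               key=lambda pose: detection_entropy(predict_confidence(pose)))
```

For example, if a frontal pose is predicted to give confidence 0.95 and a lateral pose only 0.55, the frontal pose has far lower entropy and would be chosen as the next observation point.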
Active perception
People detection
2D LiDAR analysis
Social navigation
Crowded environments
Files in this item:

File: Guglielmin_Niccolò.pdf
Embargoed until: 29/11/2024
Size: 26.73 MB
Format: Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12608/58727