

People motion prediction for social navigation in crowded environments via context-based learning

SAVIO, ANDREA
2022/2023

Abstract

Human motion trajectory prediction plays a crucial role in enabling robots to navigate and interact safely and efficiently in crowded environments: proactive decision-making, obstacle avoidance, and natural human-robot interaction all benefit from the ability to adapt the robot's motion to the predicted future. However, human motion trajectory prediction for social navigation is challenging due to the complexity of human behavior and the dynamic nature of social interactions. Many approaches learn how humans move on a map of the environment, using footage from a bird's-eye camera placed on a tall structure. Adapting these methods to a robot's onboard sensors is therefore not straightforward: the robot must possess a map of its working area and continuously update it with the humans it detects through its sensors; only then can predictions be made. Given the dynamics of crowded environments, where both humans and obstacles move frequently and at different speeds, the computational cost of building and continuously updating such maps can degrade the robot's performance and reactivity. Moreover, people's behavior changes dramatically with the context in which they move. To address these challenges, we propose a method for predicting human motion trajectories that relies only on the robot's onboard sensors, namely a 2D lidar and an RGB-D camera, and on context-aware deep learning techniques trained on a state-of-the-art dataset, JackRabbot. The method employs a Long Short-Term Memory (LSTM) model to learn trajectories, while in parallel the network learns from features extracted from the context of the environment using unsupervised learning. The method is then evaluated on popular social navigation datasets: ATC, ETH, and UCY. Results show that this approach slightly outperforms a similar model based on trajectory learning alone. Finally, the model is tested in real life on the TIAGo++ robot situated at the Autonomous Robotics Laboratory of the University of Padova.
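The architecture described above (an LSTM branch encoding observed positions, fused with a context vector learned in parallel) can be illustrated with a minimal NumPy sketch. This is not the thesis implementation: the weights are random, the context vector is a placeholder for whatever the unsupervised branch would produce from lidar/RGB-D features, and the late-fusion output projection is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMCell:
    """A single LSTM cell with randomly initialized weights (sketch only)."""
    def __init__(self, input_dim, hidden_dim):
        self.hidden_dim = hidden_dim
        # One stacked weight matrix for the four gates (input, forget, cell, output).
        self.W = rng.normal(0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)      # update the cell state
        h_new = o * np.tanh(c_new)          # emit the hidden state
        return h_new, c_new

def predict_next_position(traj, context, cell, W_out):
    """Encode an observed (T, 2) trajectory with the LSTM, fuse the
    context vector, and regress a displacement from the last point."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for xy in traj:
        h, c = cell.step(xy, h, c)
    fused = np.concatenate([h, context])    # late fusion of the context branch
    return traj[-1] + W_out @ fused

hidden, ctx_dim = 16, 4
cell = LSTMCell(input_dim=2, hidden_dim=hidden)
W_out = rng.normal(0, 0.1, (2, hidden + ctx_dim))

observed = np.array([[0.0, 0.0], [0.2, 0.1], [0.4, 0.2]])  # short observed walk
context = np.zeros(ctx_dim)   # placeholder for learned context features
pred = predict_next_position(observed, context, cell, W_out)
```

In a trained system the same loop would run under a loss on future positions, and `context` would carry the unsupervised encoding of the surroundings rather than zeros.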
Autonomous robotics
Social robotics
Social navigation
Motion prediction
Deep learning
Files in this item:
Savio_Andrea.pdf (Adobe PDF, 20.1 MB), under embargo until 11/10/2024
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/54147