
Reinforcement Learning for Autonomous Driving in the Duckietown Environment

FORCELLA, FILIPPO
2024/2025

Abstract

This thesis investigates the use of reinforcement learning for autonomous driving in the Duckietown environment, with a particular focus on intersection handling. Starting from a PPO agent capable of correctly navigating a simple map, the work progressively introduces structured mechanisms that allow the policy to approach and cross intersections in an environment where another, relevant agent is present. A key component of this progression is the integration of information about this other vehicle, which enables the agent to adjust its velocity based on relative positioning, the approach trajectory, and the estimated temporal advantage with respect to the conflict point. The proposed method is evaluated against two baselines: a standard lane-following controller and a deterministic rule-based agent that stops whenever the other vehicle is closer to the crossing point or is already approaching the intersection. Experimental analyses show that the learned policy exhibits smoother, more anticipatory deceleration, improved stability on straight segments, and more flexible and realistic behavior than the deterministic controller, while maintaining a reasonable level of safety. Despite these promising results, the study highlights several limitations, including the reliance on vector features, training restricted to a single map, and the lack of real-world validation. Addressing these aspects would improve generalization and pave the way toward multi-agent scenarios and sim-to-real transfer. Overall, this work demonstrates that reinforcement learning, when combined with appropriate structure, contextual information, and careful reward shaping, can effectively support both lane following and intersection negotiation, offering a solid foundation for future work in learning-based autonomous driving.
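The abstract does not report implementation details, but the deterministic baseline and the temporal-advantage feature it mentions can be illustrated with a minimal sketch. All names, the time-to-conflict estimate, and the cruise-speed value below are assumptions for illustration only; they are not taken from the thesis.

```python
# Illustrative sketch (assumed, not from the thesis) of the rule-based baseline
# and the temporal-advantage quantity described in the abstract.

def time_to_conflict(distance_to_conflict: float, speed: float, eps: float = 1e-3) -> float:
    """Rough time needed to reach the conflict point at the current speed."""
    return distance_to_conflict / max(speed, eps)

def temporal_advantage(ego_dist: float, ego_speed: float,
                       other_dist: float, other_speed: float) -> float:
    """Positive when the ego vehicle would reach the conflict point first."""
    return time_to_conflict(other_dist, other_speed) - time_to_conflict(ego_dist, ego_speed)

def rule_based_action(ego_dist: float, ego_speed: float,
                      other_dist: float, other_speed: float,
                      other_approaching: bool, cruise_speed: float = 0.3) -> float:
    """Stop whenever the other vehicle is closer to the crossing point or is
    already approaching the intersection; otherwise keep cruising."""
    if other_approaching or other_dist < ego_dist:
        return 0.0          # full stop before the intersection
    return cruise_speed     # proceed at a nominal lane-following speed
```

In contrast to this hard stop-or-go rule, the learned policy described above modulates its velocity continuously from features such as the temporal advantage, which is what yields the smoother, more anticipatory deceleration reported in the evaluation.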
Keywords: Duckietown, Autonomous Driving, Simulation
Full text: ForcellaMasterThesis.pdf (open access, 1.12 MB, Adobe PDF)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/102108