Adaptive Traffic Light Control Using Double Deep Q-Networks: Balancing Efficiency and Fairness for Urban Mobility

SCATTO, GIACOMO
2023/2024

Abstract

Traffic management is one of the most critical challenges in modern urban society: inefficient signal control leads to unacceptable levels of road congestion, pollution and delays for both vehicles and pedestrians. This thesis explores a novel approach to traffic light control based on Deep Reinforcement Learning (RL). The goal is to build an RL agent that dynamically selects the optimal traffic light phase and determines how long to maintain it, thereby reducing congestion and improving overall traffic flow. The proposed agent is trained to adapt to varying traffic conditions, from light to moderate to heavy congestion, ensuring stable and robust behavior across scenarios. The study also analyzes how the choice of the agent's action interval affects overall system performance. Finally, unlike most works in the literature, this study focuses on vulnerable road users, specifically pedestrians: during decision-making the model considers both vehicle and pedestrian flows, balancing their needs according to the weights assigned to each. Three weight levels were analyzed in order to find a trade-off strategy that ensures fairness of service to both drivers and pedestrians. The findings highlight how RL, and Deep RL techniques in particular, offer a promising solution for traffic management, significantly enhancing urban mobility by reducing traffic jams and improving the experience of all road users.
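The abstract describes a decision process that weights vehicle and pedestrian flows against each other. As a purely illustrative sketch, not the author's actual reward design, that balance could be expressed as a negative weighted sum of accumulated waiting times; the function name, the default weights and the specific use of waiting times below are assumptions made only for illustration.

    # Illustrative only: the exact state, action and reward definitions are
    # given in the thesis; the names and default weights here are assumptions.
    def weighted_reward(vehicle_waits, pedestrian_waits, w_veh=0.5, w_ped=0.5):
        """Negative weighted sum of accumulated waiting times (seconds).

        vehicle_waits / pedestrian_waits: waiting times collected from the
        simulator (e.g. SUMO) for the vehicles and pedestrians at the junction.
        w_veh / w_ped: relative weights; shifting them trades vehicle throughput
        against pedestrian service, the trade-off analyzed in the thesis.
        """
        return -(w_veh * sum(vehicle_waits) + w_ped * sum(pedestrian_waits))

    # Example: a vehicle-leaning weighting penalizes car queues more strongly.
    print(weighted_reward([12.0, 30.0, 5.0], [40.0, 20.0], w_veh=0.7, w_ped=0.3))

Under such a formulation, a Double DQN agent would select the next phase and its duration so as to maximize the discounted sum of these rewards across the three weight settings studied in the thesis.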
Alternative title: Intelligent Traffic Light Control Using Reinforcement Learning: Balancing Efficiency and Fairness for Urban Mobility
Keywords

Reinforcement Learning
Deep RL
Double DQN
Traffic Management
SUMO
Files in this item:

Scatto_Giacomo.pdf (open access), Adobe PDF, 12.88 MB


Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12608/80903