
Design, Implementation and Evaluation of Learning Algorithms for Predictive Quality of Service in Teleoperated Driving Scenarios

AVANZI, GIACOMO
2023/2024

Abstract

In the Teleoperated Driving (TD) scenario, strict constraints on the Quality of Service (QoS) indicators of the communication between vehicles and remote drivers, especially end-to-end (E2E) latency and reliability, must be satisfied. Predictive Quality of Service (PQoS) is a tool to predict network degradation and react accordingly. In this context, Artificial Intelligence (AI) can be used to optimize PQoS operations. However, there are several trade-offs between centralized and decentralized Reinforcement Learning (RL) solutions, which call for additional work in this area. The first goal of this thesis is to introduce realism into the learning phase with respect to the data and metrics (from the 5G protocol stack) gathered by the intelligent agent at the Radio Access Network (RAN) (centralized case) or at the vehicle (decentralized case), including the modelling in the ns-3 simulator of the communication and computational mechanisms needed to obtain those metrics. To further investigate the trade-off between performance and communication overhead, a federated learning framework is evaluated under different parameter-aggregation strategies based on the vehicles' status, also involving compression (pruning, quantization, and clustering) of the agent's model parameters to reduce the channel burden. The second objective is the implementation of a new meta-learning agent that can dynamically choose between the centralized and decentralized learning models, depending on the network status. Starting from the global condition of the vehicles, described in terms of network metrics, the optimal learning approach is chosen to maximize QoS and Quality of Experience (QoE) at the same time. In a scenario with three vehicles, the centralized learning approach achieves the best compromise between QoS, with an average E2E delay of nearly 25 ms, and QoE, with an average mean average precision (mAP) of 0.68, considering the best realistic configuration. The distributed approach further reduces the latency by 1 ms, at the cost of a poorer mAP (0.67) and more violations of the maximum tolerated E2E delay. The federated approach increases the total delay by 3 ms due to model sharing, while the QoE shows a significant improvement, with the mAP exceeding 0.68. The meta-learning agent achieves outstanding results, autonomously selecting the centralized approach in good channel conditions and the decentralized approach in a degraded network state.
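
The federated step described in the abstract, status-weighted aggregation of the agents' parameters combined with compression, can be illustrated with a minimal sketch. This is a hypothetical example, not the thesis implementation: the prune, quantize, and aggregate functions and the status scores below are illustrative placeholders for the pruning, quantization, and vehicle-status-based aggregation strategies mentioned above (weight clustering is omitted for brevity), assuming each vehicle's model is handled as a flat NumPy parameter vector.

# Hypothetical sketch: FedAvg-style aggregation of per-vehicle RL agent
# parameters, with magnitude pruning and uniform quantization applied
# before averaging to reduce the channel burden of model sharing.
import numpy as np

def prune(params, sparsity=0.5):
    # Zero out the smallest-magnitude fraction (sparsity) of the parameters.
    threshold = np.quantile(np.abs(params).ravel(), sparsity)
    return np.where(np.abs(params) < threshold, 0.0, params)

def quantize(params, num_bits=8):
    # Uniform quantization over the parameter range with 2^num_bits levels.
    lo, hi = params.min(), params.max()
    if hi == lo:
        return params
    scale = (hi - lo) / (2 ** num_bits - 1)
    return np.round((params - lo) / scale) * scale + lo

def aggregate(vehicle_params, status_scores):
    # Weighted average of the compressed per-vehicle parameters, where the
    # weights come from a per-vehicle status score (e.g., derived from
    # network metrics such as SINR or observed E2E delay).
    weights = np.asarray(status_scores, dtype=float)
    weights /= weights.sum()
    compressed = [quantize(prune(p)) for p in vehicle_params]
    return sum(w * p for w, p in zip(weights, compressed))

# Example with three vehicles, matching the scenario size in the abstract.
rng = np.random.default_rng(0)
params = [rng.normal(size=100) for _ in range(3)]
status = [0.9, 0.6, 0.3]  # illustrative status scores, higher = better channel
global_params = aggregate(params, status)

In this sketch the compression is applied before aggregation, mirroring the idea of limiting the payload each vehicle sends over the channel; the actual strategies and metrics used in the thesis are described in the full text.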
Keywords: Teleoperated Driving, Predictive QoS, RL, Federated learning, ns-3
File: Avanzi_Giacomo.pdf (open access, Adobe PDF, 4.2 MB)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/69283