Cost of Learning in Mobile Edge Computing

Boscaro, Maddalena
2023/2024

Abstract

In recent years, Mobile Edge Computing (MEC) has emerged as a promising network architecture that offers considerable computing capability in the proximity of mobile devices. MEC thus responds to the growing needs of real-time applications that require both significant computational capacity and low latency. However, the efficient allocation of resources in MEC environments remains a critical challenge, especially because resources are shared and often contested among multiple users. In this context, Reinforcement Learning (RL) algorithms have proven effective at optimizing resource management, yet existing studies tend to overlook the resource costs associated with training such algorithms. Indeed, in a system where users and the learning process draw on the same pool of resources, the two compete to meet their respective needs: on the one hand, it is essential to satisfy the users' demands; on the other, it is necessary to keep improving the RL strategy to secure a higher return in the long run. This trade-off is called the cost of learning. In this thesis, we analyze the cost of learning by comparing different resource allocation strategies within a simulated MEC environment. Furthermore, we propose several effective strategies to adjust the frequency of training, significantly reducing the impact of training while preserving high user performance. Our results show that the cost of learning is fundamental and non-negligible, especially in dynamic environments where continual training becomes necessary.
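The trade-off described above can be illustrated with a minimal sketch (not the thesis's actual algorithm): a hypothetical scheduler for a shared resource pool that serves user demand first and spends capacity on RL training only when enough spare capacity remains. The function name, parameters, and threshold rule are illustrative assumptions, not taken from the thesis.

```python
def schedule_step(capacity, user_demands, train_cost, spare_threshold=0):
    """One scheduling step over a shared resource pool (illustrative).

    Users are served first; training runs only if the spare capacity
    left over covers its cost plus a safety threshold. Returns the
    amount of demand served and whether training was scheduled.
    All quantities are abstract resource units (an assumption).
    """
    requested = sum(user_demands)
    served = min(requested, capacity)      # users have priority
    spare = capacity - served              # what is left for learning
    trained = spare >= train_cost + spare_threshold
    return served, trained

# Light load: 7 of 10 units requested, 3 spare, training (cost 2) fits.
served, trained = schedule_step(10, [3, 4], train_cost=2)

# Heavy load: demand saturates the pool, so training is skipped.
served_busy, trained_busy = schedule_step(10, [6, 6], train_cost=2)
```

Under this toy policy, training frequency falls automatically as user load rises, which is the essence of the cost-of-learning trade-off: skipped training protects short-term user performance at the expense of long-term policy improvement.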
Keywords: RL, Scheduling, Cost of learning
Files in this item:
Boscaro_Maddalena.pdf (open access, 2.16 MB, Adobe PDF)

The text of this website © Università degli Studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/72822