Adaptive Local Steps and Server-Side Optimization for Asynchronous Federated Continual Learning

ANTONELLI, DAVIDE
2025/2026

Abstract

Asynchronous Federated Continual Learning (AFCL) represents a realistic yet challenging scenario where heterogeneous clients learn distinct tasks over time without central coordination. While prototype-based methods like FedSpace effectively address catastrophic forgetting, they often suffer from high computational costs and performance degradation in non-IID settings. This thesis proposes a framework designed to reconcile efficiency and robustness. First, we introduce FedAlt (Federated Adaptive Local Training), a strategy that accelerates convergence by dynamically adjusting local training steps, significantly reducing communication overhead. To recover the accuracy loss induced by accelerated training, we propose FedSpo (Federated Server-Side Prototype Optimization), a privacy-preserving module in which the server refines the global model using synthetic data generated from global prototypes. We further investigate advanced tuning strategies, including Quality Filtering for medium fragmentation and Soft Mixup for extreme fragmentation. Experiments on CIFAR-100 (with 50, 100, and 500 clients) demonstrate that our approach establishes a highly effective trade-off between computational efficiency and classification accuracy compared to state-of-the-art baselines.
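The abstract describes two mechanisms only at a high level: adapting the number of local training steps per round (FedAlt) and refining the global model with synthetic data drawn from global class prototypes (FedSpo). The full thesis PDF contains the actual algorithms; the sketch below is only a minimal, hypothetical illustration of these two ideas. The scheduling rule (scaling local steps by a client's relative loss improvement), the Gaussian sampling around prototypes, and all function names and constants are assumptions made for the example, not the method proposed in the thesis.

```python
# Hypothetical sketch of two ideas named in the abstract:
# (1) adapting the local-step budget per client, (2) generating synthetic
# features from global class prototypes for server-side refinement.
# All rules and constants here are illustrative assumptions.

import numpy as np

def adaptive_local_steps(prev_loss, curr_loss, base_steps=5,
                         min_steps=1, max_steps=20):
    """Choose the number of local training steps for the next round.

    Assumed rule: clients whose loss is still dropping quickly keep more
    local steps; clients that have plateaued train less, reducing
    computation and per-round communication overhead.
    """
    if prev_loss is None:
        return base_steps
    improvement = max(prev_loss - curr_loss, 0.0) / max(prev_loss, 1e-8)
    # Scale the step budget with the relative improvement, then clip.
    steps = int(round(base_steps * (0.5 + improvement * 10)))
    return int(np.clip(steps, min_steps, max_steps))

def synthesize_from_prototypes(prototypes, samples_per_class=32, noise_std=0.1):
    """Generate synthetic feature vectors around each global class prototype.

    Assumption: a prototype is the mean feature vector of a class; adding
    small Gaussian noise yields pseudo-features the server can use to
    refine the global classifier without seeing any raw client data.
    """
    feats, labels = [], []
    for label, proto in prototypes.items():
        noise = np.random.normal(0.0, noise_std,
                                 size=(samples_per_class, proto.shape[0]))
        feats.append(proto[None, :] + noise)
        labels.extend([label] * samples_per_class)
    return np.concatenate(feats, axis=0), np.array(labels)

if __name__ == "__main__":
    # Toy 8-dimensional prototypes for two classes.
    protos = {0: np.ones(8), 1: -np.ones(8)}
    X, y = synthesize_from_prototypes(protos, samples_per_class=4)
    print(X.shape, y)                       # (8, 8) [0 0 0 0 1 1 1 1]
    print(adaptive_local_steps(1.0, 0.8))   # loss still dropping: more steps
    print(adaptive_local_steps(1.0, 0.99))  # near plateau: fewer steps
```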
Keywords: AFCL, Prototypes, Adaptivity, Server Optimization, Server Tuning
Files in this item:
File: antonelli_davide.pdf (open access)
Size: 7.78 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/106478