
Training Techniques for Federated Continual Learning

ZHANG, QIQI
2024/2025

Abstract

Federated Continual Learning (FCL) addresses the challenge of adapting a model to class-distribution shifts over time while learning from non-IID data distributed across multiple clients. Building on the FedSpace framework, this thesis explores the integration of Hyperbolic Prototypes (HP), a Nearest Mean Prototype (NMP) classifier, and Knowledge Distillation (KD) to enhance classification accuracy and mitigate catastrophic forgetting in FCL. Hyperbolic prototypes are leveraged to capture the hierarchical structure of data representations, enabling a more compact and expressive encoding of class relationships. The NMP classifier, known for its efficiency in continual learning, classifies samples by comparing their embeddings against the class-wise mean of training embeddings. Additionally, Knowledge Distillation is introduced to transfer knowledge from past to current models, aiming to retain previously learned information while accommodating new data. Experimental results reveal that integrating hyperbolic prototypes and the NMP classifier did not improve classification accuracy: while theoretically beneficial, these methods struggled under the challenging conditions of FCL. In contrast, Knowledge Distillation yielded a slight performance improvement, suggesting its potential for mitigating forgetting in FCL settings. This study provides a comprehensive analysis of these methods within the FedSpace framework, examining the limitations and potential of hyperbolic spaces, prototype-based classifiers, and knowledge-transfer techniques in advancing FCL systems.
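The abstract names three concrete mechanisms: hyperbolic (Poincaré-ball) distances to prototypes, an NMP classifier built from class-wise mean embeddings, and a soft-target distillation loss between past and current models. The following is a minimal NumPy sketch of what each of these computes; it is an illustration under common formulations, not the thesis's actual implementation, and all class, function, and parameter names here are hypothetical.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2)) + eps
    return np.arccosh(1.0 + 2.0 * sq_dist / denom)

class NearestMeanPrototypeClassifier:
    """One prototype per class: the mean of that class's training embeddings."""

    def fit(self, embeddings, labels):
        self.classes_ = np.unique(labels)
        self.prototypes_ = np.stack(
            [embeddings[labels == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, embeddings, metric="euclidean"):
        preds = []
        for z in embeddings:
            if metric == "hyperbolic":  # assumes embeddings lie in the unit ball
                dists = [poincare_distance(z, p) for p in self.prototypes_]
            else:
                dists = np.linalg.norm(self.prototypes_ - z, axis=1)
            preds.append(self.classes_[int(np.argmin(dists))])
        return np.array(preds)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target KD loss: KL(teacher || student) on temperature-softened logits."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p = softmax(teacher_logits / temperature)
    q = softmax(student_logits / temperature)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl)) * temperature ** 2
```

In an FCL round, a sketch like this would fit prototypes on each client's local embeddings and add the distillation term (teacher = previous global model) to the local objective; the loss vanishes when student and teacher agree.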
Keywords: Federated Learning; Continual Learning; Hyperbolic Prototype


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/84556