Byzantine-Resilient Federated Learning
PASDAR, ABBAS
2023/2024
Abstract
Distributed learning (DL) has emerged as a pivotal solution to the challenges posed by large datasets, complex models, and computationally intensive tasks. Its methodologies are applied across diverse domains, including smart grids, healthcare, and the Industrial Internet of Things (IIoT). Federated learning (FL), a subset of DL, specifically addresses privacy concerns by letting agents optimize models locally on their own data and share only model parameters with a central server, never the raw data. Despite these advances, the distributed nature of FL remains susceptible to faults induced by malicious agents, notably Byzantine faults. This thesis focuses on mitigating Byzantine faults in FL. First, I introduce a novel attack scenario that exposes vulnerabilities in existing defense mechanisms against Byzantine agents. To counter this and other state-of-the-art attacks, I propose a new algorithm designed to strengthen the robustness of FL systems against Byzantine threats. The robustness of the proposed defense and the convergence of the proposed algorithm are established through rigorous theoretical proofs. Finally, simulations of various attack and defense scenarios demonstrate the effectiveness of both the proposed attack and the proposed defense.
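The Byzantine-fault setting described in the abstract can be illustrated with a minimal sketch. This is not the algorithm proposed in the thesis; it is a generic toy example in which several honest agents send similar gradient-like updates, one Byzantine agent sends an arbitrary vector, and a coordinate-wise median (a classic robust aggregator from the literature) limits the damage that plain FedAvg-style averaging suffers. All names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 9 honest agents whose true update is roughly [1, 1, 1],
# plus 1 Byzantine agent sending an arbitrary large vector.
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 3))
byzantine = np.array([[100.0, -100.0, 100.0]])
updates = np.vstack([honest, byzantine])

# Plain averaging (as in FedAvg): a single Byzantine agent can shift the
# aggregate arbitrarily far from the honest consensus.
mean_agg = updates.mean(axis=0)

# Coordinate-wise median: a standard Byzantine-robust aggregator (one of
# several known defenses; not the specific algorithm proposed in the thesis).
median_agg = np.median(updates, axis=0)

print("mean  :", mean_agg)    # dragged far from [1, 1, 1] by the attacker
print("median:", median_agg)  # stays close to the honest updates
```

The contrast between the two aggregates shows why the server-side aggregation rule, not just the learning algorithm, determines resilience to Byzantine agents.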
The text of this website is © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.
https://hdl.handle.net/20.500.12608/69289