Privacy and security in distributed learning: common threats and countermeasures
HABIB, ALAA MOHAMED ABDELHAMID OTHMAN
2024/2025
Abstract
Distributed learning, especially federated learning, allows multiple participants to train a model collaboratively without sharing their raw data. While this approach has significant privacy advantages, it also introduces challenges that can compromise data confidentiality, model reliability, and system security. In this thesis, I explore the key privacy and security issues in distributed learning, such as data leakage, membership inference, poisoning attacks, and communication-channel risks. I also review current countermeasures, including differential privacy, secure multi-party computation, robust aggregation methods, and techniques for detecting and preventing attacks. My research focuses on finding a balance between protecting privacy and maintaining model accuracy. I analyze how techniques such as encryption and noise addition can safeguard sensitive information, and I examine their trade-offs. Additionally, I study security measures, such as attack detection and mitigation, that keep the learning process trustworthy. I propose a combined approach that integrates privacy and security measures to address these issues more effectively. Using detailed examples, simulations, and case studies, this thesis shows how these strategies can strengthen distributed learning systems against threats while keeping them efficient. The goal of this research is to offer practical insights into building safer and more reliable distributed learning systems, contributing to the growing field of privacy-preserving artificial intelligence.
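The abstract names noise addition (differential privacy) and robust aggregation as complementary countermeasures. As a rough illustration of how these two mechanisms can combine in a single federated-averaging round, the following Python sketch clips and noises each client update, then aggregates with a coordinate-wise median; the clipping norm, noise multiplier, client counts, and update values are illustrative assumptions, not figures or code from the thesis.

```python
# Toy sketch (not the thesis's implementation): one federated round
# combining DP-style noising of client updates with a robust
# (coordinate-wise median) aggregator that tolerates a minority of
# poisoned clients. All parameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def clip_update(update: np.ndarray, max_norm: float) -> np.ndarray:
    """Clip a client's update to a maximum L2 norm, bounding each
    client's influence (a prerequisite for calibrating DP noise)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def privatize(update: np.ndarray, max_norm: float, noise_mult: float) -> np.ndarray:
    """Add Gaussian noise scaled to the clipping norm (DP-SGD style)."""
    clipped = clip_update(update, max_norm)
    return clipped + rng.normal(0.0, noise_mult * max_norm, size=update.shape)

def robust_aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Coordinate-wise median: outlying (poisoned) updates shift the
    result far less than they would shift a plain average."""
    return np.median(np.stack(updates), axis=0)

# Ten honest clients push small updates; two poisoned clients push
# large malicious ones. Clipping and the median dampen the outliers,
# while the noise limits what any single update reveals.
honest = [rng.normal(0.1, 0.01, size=5) for _ in range(10)]
poisoned = [np.full(5, 50.0) for _ in range(2)]
noisy = [privatize(u, max_norm=1.0, noise_mult=0.1) for u in honest + poisoned]
print("aggregated update:", robust_aggregate(noisy))
```

The median is only one of several robust aggregators (trimmed mean, Krum, and others are discussed in the literature); it is used here because it is the simplest to demonstrate alongside noise addition.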
| File | Access | Size | Format |
|---|---|---|---|
| Habib_Alaa.pdf.pdf | open access | 275.22 kB | Adobe PDF |
https://hdl.handle.net/20.500.12608/89357