A Privacy-preserving and Communication-Efficient Federated Learning solution for Industrial Applications

MOHAMMADI, MOHAMMADREZA
2022/2023

Abstract

Privacy-preserving federated learning has attracted considerable interest because it enables collaborative model training without compromising participants' privacy. This thesis examines a wide range of defense and attack strategies in privacy-preserving federated learning. First, I introduce the concept of privacy-preserving federated learning and discuss its structure, benefits, and drawbacks. I then cover defense mechanisms that keep participants' information private, including differential privacy, secure aggregation, and homomorphic encryption. In addition, I look at attack methods that can jeopardize participants' privacy, such as membership inference and model inversion, and I examine the results of model inversion attacks and the measures taken to counter them. I consider three distinct industrial use cases from the DAIS project, intended for real-world deployment in the near future, and implement a federated learning system for each while keeping in mind the privacy requirements of federated learning environments. As a further step, I propose a new client selection method based on each client's amount of data to improve the framework's accuracy and communication efficiency. I also propose a novel method, Parameter Randomization, to enhance the privacy and communication efficiency of federated learning systems. Through these two approaches, this thesis gives a thorough account of privacy-preserving and communication-efficient federated learning and emphasizes the need for robust defense and mitigation mechanisms that protect participant privacy against attacks while keeping model accuracy as high as possible.
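The data-volume-based client selection mentioned above can be illustrated with a minimal sketch. The thesis's exact criterion is not reproduced here; this sketch assumes clients are sampled without replacement with probability proportional to their local dataset size, and all names (`select_clients`, the client identifiers) are hypothetical:

```python
import random

def select_clients(client_data_sizes, k, seed=None):
    """Pick k clients for a training round, weighted by local dataset size.

    client_data_sizes: dict mapping client id -> number of local samples.
    Clients holding more data are more likely to be chosen, which tends to
    improve per-round accuracy and reduce wasted communication.
    """
    rng = random.Random(seed)
    ids = list(client_data_sizes)
    weights = [client_data_sizes[c] for c in ids]
    selected = []
    for _ in range(min(k, len(ids))):
        # Weighted draw, then remove the winner so selection is without replacement.
        chosen = rng.choices(ids, weights=weights, k=1)[0]
        i = ids.index(chosen)
        ids.pop(i)
        weights.pop(i)
        selected.append(chosen)
    return selected

# Hypothetical industrial clients with different amounts of local data.
sizes = {"plant_a": 5000, "plant_b": 1200, "plant_c": 300, "plant_d": 4000}
print(select_clients(sizes, k=2, seed=42))
```

In a real federated round, the server would run such a selection step before broadcasting the global model, so that aggregation weights and bandwidth are spent on the most data-rich participants.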
Keywords: Federated Learning, Differential Privacy, Homomorphic Encryption, Randomization
File: Mohammadi_Mohammadreza.pdf (Adobe PDF, 5.25 MB), Open Access from 03/04/2024


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/45147