Differentially Private Federated Learning: Study and Implementation of Additive Noise Mechanisms.

GUO, HONGYU
2024/2025

Abstract

Federated Learning (FL) allows multiple clients to collaboratively train machine learning models without sharing raw data, thereby preserving data locality. However, the exchange of model updates can still leak sensitive information. To address this challenge, Differential Privacy (DP) provides a formal mathematical framework that protects individual data contributions by injecting controlled random noise. This thesis studies and implements additive noise mechanisms, specifically the Gaussian and Laplace mechanisms, within both server-based and serverless federated learning architectures. The research considers two protection levels: sample-level and client-level differential privacy. A unified experimental setup is designed to compare how these mechanisms influence model convergence, stability, and privacy–utility trade-offs under different system structures. The study is expected to provide insights into how noise distribution, privacy granularity, and communication topology jointly affect performance in federated learning. The ultimate goal is to identify design guidelines for implementing differentially private FL systems that balance privacy guarantees with learning efficiency in both centralized and decentralized environments.
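
To make the two mechanisms concrete, the following is a minimal NumPy sketch of how additive noise is typically applied to a clipped client update before aggregation in differentially private FL. It is only an illustration under assumed names and parameter values (clip_update, clip_norm, noise_multiplier, epsilon, and the choice of clipping norms are mine), not the implementation used in the thesis.

# Minimal sketch of the two additive noise mechanisms on a clipped client
# update (client-level DP). Names and parameter values are illustrative
# assumptions, not taken from the thesis.
import numpy as np


def clip_update(update, clip_norm, ord):
    # Rescale the update so its L1 (ord=1) or L2 (ord=2) norm is at most
    # clip_norm; this bounds the sensitivity of the aggregation step.
    norm = np.linalg.norm(update, ord=ord)
    return update * min(1.0, clip_norm / max(norm, 1e-12))


def gaussian_mechanism(update, clip_norm, noise_multiplier, rng):
    # Gaussian mechanism: clip in L2 norm, then add zero-mean Gaussian noise
    # with standard deviation noise_multiplier * clip_norm to each
    # coordinate; gives (epsilon, delta)-DP once the multiplier is
    # calibrated by a privacy accountant.
    clipped = clip_update(update, clip_norm, ord=2)
    sigma = noise_multiplier * clip_norm
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)


def laplace_mechanism(update, clip_norm, epsilon, rng):
    # Laplace mechanism: clip in L1 norm, then add Laplace noise with scale
    # sensitivity / epsilon; gives pure epsilon-DP.
    clipped = clip_update(update, clip_norm, ord=1)
    scale = clip_norm / epsilon
    return clipped + rng.laplace(0.0, scale, size=clipped.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    raw_update = rng.normal(size=10)  # stand-in for one client's model update
    print(gaussian_mechanism(raw_update, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
    print(laplace_mechanism(raw_update, clip_norm=1.0, epsilon=1.0, rng=rng))

The same clip-and-add-noise pattern distinguishes the two protection levels mentioned above: client-level DP applies it to whole client updates before they are shared, while sample-level DP applies it to per-example gradients inside each client's local training loop (as in DP-SGD).
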
Keywords: Federated Learning; Differential Privacy; Serverless Learning; Additive Noise; Privacy Preservation
Files in this record:
Differentially_Private_Federated_Learning_Study_and_Implementation_of_Additive_Noise_Mechanisms.pdf (open access, Adobe PDF, 3.1 MB)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/102112