TEE Against Backdoor Attack in Federate Learning

BASTIANON, MATTIA
2021/2022

Abstract

Deep learning systems are now widely used in applications such as IoT, law enforcement, industrial production, virtual assistants, and autonomous driving. Their strength lies in the ability to learn complex tasks by training on a dataset that pairs inputs with the desired outputs. A newer paradigm, Federated Learning, allows a group of users to collaboratively train a shared model, each on their own data, without compromising privacy. Doing so, however, exposes the model to backdoor attacks, whose goal is to disrupt its normal behavior so that it solves a task incorrectly, or exactly as the attacker intends. Existing defenses fail to protect against such attacks. In this work we propose a new approach based on a TEE (Trusted Execution Environment), which makes it possible to use the local dataset of each user to detect whether a submitted model is compromised. We investigate the feasibility of several directions that consist of analyzing the output of each individual layer of a model when the local dataset is given as input, and we highlight the drawbacks of these approaches. This work is intended for readers who want to pursue the same direction: our results can serve either as a starting point for further improvement or as a warning against repeating the same mistakes.
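
As a rough illustration of the layer-inspection direction described above, a per-layer comparison between a client's submitted model and the current global model, evaluated on the user's trusted local data, could be sketched as follows. This is only a minimal sketch under assumed names: layer_outputs, suspicious, the cosine-distance measure, and the threshold value are illustrative choices, not the thesis' actual implementation, and in the proposed setting such a check would run inside the TEE.

```python
import torch
import torch.nn.functional as F


def layer_outputs(model, batch):
    """Collect each leaf layer's output for one batch via forward hooks."""
    model.eval()
    acts, hooks = {}, []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf layers only
            hooks.append(module.register_forward_hook(
                lambda m, inp, out, name=name: acts.__setitem__(name, out.detach())))
    with torch.no_grad():
        model(batch)
    for h in hooks:
        h.remove()
    return acts


def suspicious(global_model, client_model, local_batch, threshold=0.2):
    """Flag a client model whose per-layer activations drift too far from the
    current global model when both are fed the user's trusted local batch.
    The cosine-distance measure and the threshold are illustrative assumptions."""
    ref = layer_outputs(global_model, local_batch)
    upd = layer_outputs(client_model, local_batch)
    drifts = [
        1.0 - F.cosine_similarity(ref[n].flatten(), upd[n].flatten(), dim=0).item()
        for n in ref if n in upd
    ]
    return max(drifts) > threshold
```

In a full federated round the server (or the TEE on its behalf) would run a check of this kind on every received update before aggregation; the thesis investigates variants of this idea and discusses why they fall short in practice.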
Backdoor Attacks
Federated Learning
Machine Learning
Files in this record:
Bastianon_Mattia.pdf (Adobe PDF, 4.21 MB, open access)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/31549