Model Selection Hijacking Attack

PETRUCCI, RICCARDO
2023/2024

Abstract

In recent years, alongside the widespread diffusion and adoption of Artificial Intelligence applications, companies have started to rely more and more on third parties, known as Machine Learning as a Service (MLaaS) providers, to outsource the training and deployment of Machine Learning (ML) models, thus avoiding the need for in-house experts. With the adoption of these new technologies, and given the high energy consumption of modern ML models, which heavily influences their deployment costs, it has become paramount to study their safety, robustness, and security from an energy-latency perspective. Framed in this setting, in this Thesis I focus on the security of an often overlooked phase in the development of a Machine Learning model: Model Selection (MS), whose outcome can spell the difference between choosing a well-generalized, high-performing model and choosing one whose high energy consumption makes deployment more expensive. I therefore devise and carry out a first-of-its-kind attack on the MS phase, called the Model Selection Hijacking Attack (MSHA), and evaluate its impact and realizability. To my knowledge, MSHA is the first attack aimed at hijacking the MS phase, forcing it to select a model that scores high on an arbitrarily chosen hijack metric, solely by injecting poisoned data into the validation set, without tampering with the training set, the model parameters, or the Model Selection algorithm itself. Following the line of research on energy-latency attacks, I choose a hijack metric designed to select a model with high energy consumption, a goal I achieve against models trained on MNIST and CIFAR10 with a success rate of up to 98.6%.
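To make the mechanism concrete, the sketch below is a minimal toy illustration (not the Thesis's actual MSHA algorithm) of how poisoning only the validation set can flip which candidate an accuracy-based Model Selection step picks. The two one-dimensional threshold classifiers, their names (`efficient_model`, `sponge_model`), and the poisoning region are all assumptions made purely for the demo.

```python
import numpy as np

# Illustrative sketch only: a toy showing how a poisoned validation set
# can flip the winner of accuracy-based Model Selection, without touching
# training data, model parameters, or the selection rule itself.

rng = np.random.default_rng(0)

def validation_accuracy(model, X_val, y_val):
    """Fraction of validation points the candidate model labels correctly."""
    return float(np.mean(model(X_val) == y_val))

# Two hypothetical candidates over 1-D inputs (names are demo assumptions):
# `efficient_model` matches the true rule; `sponge_model` stands in for the
# high-energy candidate the attacker wants selected.
efficient_model = lambda X: (X > 0.5).astype(int)
sponge_model = lambda X: (X > 0.7).astype(int)

# Clean validation set drawn from the true decision rule (threshold 0.5).
X_clean = rng.uniform(0.0, 1.0, size=200)
y_clean = (X_clean > 0.5).astype(int)

# Attacker-crafted points in (0.5, 0.7), deliberately labeled 0: only
# `sponge_model` classifies them "correctly", inflating its score.
X_poison = rng.uniform(0.5, 0.7, size=120)
y_poison = np.zeros_like(X_poison, dtype=int)

X_val = np.concatenate([X_clean, X_poison])
y_val = np.concatenate([y_clean, y_poison])

candidates = {"efficient_model": efficient_model,
              "sponge_model": sponge_model}

# Standard Model Selection: keep the candidate with the best validation
# score. Only the validation set was poisoned, yet the winner changes.
scores = {name: validation_accuracy(m, X_val, y_val)
          for name, m in candidates.items()}
winner = max(scores, key=scores.get)
print(scores, "->", winner)  # the poisoned set steers selection to sponge_model
```

In this toy setup the clean points alone would rank `efficient_model` first; the 120 mislabeled points in the gap between the two thresholds are scored correctly only by `sponge_model`, which is enough for it to win the argmax.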
Keywords: Model Selection Hijacking Attack, Adversarial ML, Sponge Attack, Poisoning Attack, Model Selection, Energy Attack
Files in this record:
File: Petrucci_Riccardo.pdf (restricted access)
Size: 2.67 MB
Format: Adobe PDF
Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/71045