A Zeroth-Order Method for Adversarial Attacks
CORRADO, LORENZO
2022/2023
Abstract
The diffusion of machine learning technologies and their applications is increasing, and they are becoming more and more important in everyday life. One area of interest in this field is adversarial machine learning, where models are exposed to perturbed input data, known as "adversarial examples", crafted to induce classification errors. This poses a serious threat to the reliability and security of models, especially when they are used in real-world applications. This thesis investigates the generation of adversarial examples to attack deep neural networks used for image classification. The aim is to minimise a black-box function using algorithms developed for structured optimisation. The formulation of the underlying mathematical model allows us to solve large-scale instances of the problem and find sparse solutions, generating adversarial examples that are very similar to the original images.
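In the black-box setting described above, the attacker can only query the model's loss value, not its gradients, so a zeroth-order method estimates descent directions from function evaluations alone. The following is a minimal sketch of a standard two-point random-direction gradient estimator; the function names and parameters are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=500, rng=None):
    """Estimate the gradient of a black-box loss f at x using only
    function queries: two evaluations per random Gaussian direction."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Central finite difference of f along direction u
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / n_samples

# Stand-in smooth loss for illustration; in an attack, f would query
# the target classifier with a perturbed image and return the attack loss.
f = lambda x: np.sum(x ** 2)
x = np.ones(4)
g = zo_gradient(f, x, rng=np.random.default_rng(0))
# g approximates the true gradient 2*x up to Monte Carlo noise
```

The estimate is unbiased up to a smoothing term controlled by `mu`, and its variance shrinks as `n_samples` grows; in a query-limited attack, this trade-off between query budget and gradient accuracy is the central practical concern.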
File: A_Zeroth_Order_Method_for_Adversarial_Attacks_Lorenzo_Corrado.pdf (restricted access, 3.69 MB, Adobe PDF)
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.
https://hdl.handle.net/20.500.12608/46199