Investigating Adversarial Attacks on Deep Learning Models for RGB Remote Sensing Image Classification

MATTELIGH, ELISA
2023/2024

Abstract

In recent years, there have been significant advances both in Deep Learning and in Remote Sensing (RS) technologies. RS image recognition models based on deep convolutional neural networks outperform traditional hand-crafted feature techniques and are therefore broadly used across different applications. However, previous research shows that deep learning models are susceptible to adversarial attacks. In computer vision classification, these attacks are subtle perturbations of the input images that lead a model to misclassify, often with high confidence. This is a particularly concerning problem in remote sensing, which is used in critical areas such as military applications. We review the work that designs adversarial attacks, and we analyze the existence of such attacks in the context of remote sensing images. The results show that remote sensing image (RSI) recognition models are also vulnerable to adversarial examples; moreover, these adversarial examples are able, to some extent, to "transfer" across different models, meaning that they can also fool models they were not crafted for. Adversarial examples in RSI recognition are of great significance for the security of remote sensing applications and reveal substantial potential for future research.
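
The abstract describes adversarial attacks only in general terms. For illustration, the sketch below implements the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015), one of the attacks commonly studied in this literature. The PyTorch framing, the function name fgsm_attack, and the epsilon budget are illustrative assumptions, not details taken from the thesis.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Illustrative FGSM sketch (not the thesis's implementation).

    model:   a classifier returning logits, e.g. a CNN for RS imagery
    image:   tensor of shape (1, 3, H, W) with pixel values in [0, 1]
    label:   ground-truth class index, tensor of shape (1,)
    epsilon: maximum per-pixel perturbation (L-infinity budget)
    """
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step that increases the loss, then clip back
    # to the valid pixel range so the perturbation stays subtle.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

An adversarial example of this kind is said to "transfer" when, although crafted against one model, it is also misclassified by a different model it was never computed on; this is the cross-model property the abstract refers to.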
Keywords: Adversarial attacks; Remote sensing; Deep learning
Files in this item:
File: ElisaMatteligh_Tesi.pdf (restricted access)
Size: 2.9 MB
Format: Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license; metadata are available under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/64793