Toward Explainable deepfake detectors

LISSANDRON, FRANCESCO
2021/2022

Abstract

Over the last few years, rapid advances in deep learning (DL) have produced new techniques and tools that have drawn considerable interest as effective approaches to problems in several domains. Although the technology is primarily employed for legitimate applications, such as autonomous vehicles and entertainment, malicious users have also exploited it for unlawful or malicious purposes. For example, high-quality, realistic fake videos, images, and audio have been generated to spread misinformation and propaganda, or even to foment political discord and hate, making public figures convenient targets for these forgeries. Such manipulated content has recently become known as deepfake. There is therefore a need to identify such samples, and automated solutions based on DL techniques can be an effective means to do so. The black-box nature of DL systems allows robust predictions, but these systems cannot be completely trusted: their current inability to explain their own decisions to human users limits their efficacy. The fundamental issue is gaining the trust of human agents, so building interpretable and easily explainable solutions is imperative. In this work, we investigate several explainable artificial intelligence (XAI) approaches in order to improve our ability to interpret deepfakes and the deepfake detection process. In particular, in a classification scenario of fake and real face images, we study deepfake detectors and the correlations among their predictions. Our goal is to better understand whether different deepfake detection models use the same information to make a decision in a classification context. In other words, we want to understand which parts of an image deepfake detectors rely on for classification, and whether different detectors take similar parts of a sample into account when producing the same classification. To achieve this goal, we analyze the coherence of different deepfake detectors. Moreover, we use facial landmarks to extract and weigh the contribution of each part of the image. Furthermore, we introduce an application scenario for prediction analysis. Results show that our objective is not easy to achieve and that there is still a visible gap between the predictions produced by different deepfake detectors. However, we find that some parts of the image contribute more to predictions, such as the jaw of the face and, surprisingly, the background of the image.
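As a rough illustration of the landmark-based weighting described above, the following sketch is not the thesis code: the region groupings, the OpenCV/NumPy calls, and the function names region_contributions and detector_agreement are assumptions made for illustration. It averages an explanation map (for example, a Grad-CAM heatmap) over the standard 68-point facial regions plus the background, and then correlates the resulting region profiles of two detectors.

# Hypothetical sketch, not the thesis implementation: weigh an explanation map
# by facial regions derived from 68-point landmarks, then compare the region
# profiles of two detectors.
import numpy as np
import cv2

# Standard 68-point landmark groups (dlib-style indexing).
REGIONS = {
    "jaw": list(range(0, 17)),
    "right_eyebrow": list(range(17, 22)),
    "left_eyebrow": list(range(22, 27)),
    "nose": list(range(27, 36)),
    "right_eye": list(range(36, 42)),
    "left_eye": list(range(42, 48)),
    "mouth": list(range(48, 68)),
}

def region_contributions(saliency: np.ndarray, landmarks: np.ndarray) -> dict:
    """Average saliency inside each facial region, plus the background.

    saliency  : HxW non-negative map from any XAI method (e.g. Grad-CAM).
    landmarks : (68, 2) array of (x, y) points for the same image.
    """
    h, w = saliency.shape
    face_mask = np.zeros((h, w), dtype=np.uint8)
    contributions = {}
    for name, idx in REGIONS.items():
        pts = landmarks[idx].astype(np.int32)
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, cv2.convexHull(pts), 1)
        face_mask |= mask
        contributions[name] = float(saliency[mask == 1].mean()) if mask.any() else 0.0
    # Everything outside the convex facial regions counts as background.
    bg = face_mask == 0
    contributions["background"] = float(saliency[bg].mean()) if bg.any() else 0.0
    return contributions

def detector_agreement(contrib_a: dict, contrib_b: dict) -> float:
    """Pearson correlation between two detectors' region profiles."""
    keys = sorted(contrib_a)
    a = np.array([contrib_a[k] for k in keys])
    b = np.array([contrib_b[k] for k in keys])
    return float(np.corrcoef(a, b)[0, 1])

Under this kind of aggregation, a high correlation between two detectors' region profiles would indicate that they attend to similar parts of the image, while consistently large background values would match the observation that the area outside the face also contributes to predictions.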
Keywords: deepfake, explainability, deepfake detectors, xai
Files in this item:
File: Lissandron_Francesco.pdf (restricted access)
Size: 6.92 MB
Format: Adobe PDF
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/42056