An empirical analysis of software tools for the creation of realistic deepfake videos

SOUFEH, SAEED
2024/2025

Abstract

This thesis focuses on deepfake technology, specifically the creation of "talking head" videos. The investigation has a broad, interdisciplinary scope, examining state-of-the-art techniques, data requirements, and computational costs, which are explored by implementing two available approaches to talking-head video generation, using the two best-performing methods for each. The research also includes a critical review and discussion of current legal and ethical considerations concerning the risks and regulation of this technology. The empirical investigation of deepfake generation is divided into two main approaches. The first generates a talking-head video by swapping a source face onto a target video. The second creates a talking-head video directly from an audio input and a single source image or video, producing speech synchronized with the audio. The latter approach is also explored to evaluate its potential applicability in other domains.
Keywords: Deepfake, Generative AI, Video
Files in this item:
Soufeh_Saeed.pdf — Adobe PDF, 31.18 MB (restricted access)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/102138