Multimodal Federated Learning In The Autonomous Driving Scenario

ALGUN, MUSTAFA
2023/2024

Abstract

This thesis investigates Multimodal Federated Learning, specifically the integration of RGB and depth camera data in the autonomous driving scenario. Building on the Federated Learning paradigm, which enables collaborative model training across decentralized devices without compromising data privacy, the research compares several input configurations: RGB-only, Depth-only, and the fusion of RGB and Depth. The experiments use the Cityscapes dataset in its original RGB form as well as an augmented version that incorporates depth camera data. The study employs an encoder-decoder architecture featuring MobileNet-v2 and DeepLabv3. The different modalities offer distinct insights into the algorithm's ability to learn and generalize across data types. The findings advance the understanding of Multimodal Federated Learning in autonomous driving scenarios and show its potential for improved performance and adaptability in real-world environments.
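The collaborative training without raw-data exchange described above corresponds to FedAvg-style aggregation, in which only model weights leave each client. The sketch below is a minimal illustration of that idea in PyTorch, not the thesis's actual training code; the names federated_average, client_a, and client_b are hypothetical placeholders.

```python
# Minimal FedAvg-style aggregation sketch (assumption: standard weighted
# averaging of client weights; all names are illustrative, not from the thesis).
from collections import OrderedDict
import torch

def federated_average(client_states, client_sizes):
    """Average client state_dicts, weighting each client by its local dataset size.

    Only model parameters are exchanged; the raw RGB/Depth images never leave
    the clients, which is the privacy property the abstract refers to.
    """
    total = float(sum(client_sizes))
    averaged = OrderedDict()
    for key in client_states[0]:
        averaged[key] = sum(
            state[key].float() * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return averaged

# Toy round with two clients holding different amounts of local data,
# e.g. one trained on RGB-only frames and one on Depth-only frames.
client_a = OrderedDict(weight=torch.ones(2, 2))
client_b = OrderedDict(weight=torch.zeros(2, 2))
global_state = federated_average([client_a, client_b], client_sizes=[300, 100])
print(global_state["weight"])  # every entry is (300*1 + 100*0) / 400 = 0.75
```

For the fused RGB and Depth configuration, one common approach is to concatenate the depth map as an additional input channel before the encoder; the exact fusion strategy used in the thesis is not detailed in this record.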
Keywords

Federated Learning
Autonomous Driving
Deep Learning
Computer Vision
Machine Learning
Files in this item:

Algun_Mustafa.pdf (open access), 12.92 MB, Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/64493