Artificial Intelligence for Computer Vision
TOFFOLETTO, MARCO
2023/2024
Abstract
This thesis presents advances in Artificial Intelligence (AI) applied to Computer Vision concerning the interpretability of Convolutional Neural Networks (CNNs), in particular for critical applications. As CNNs find their way into healthcare, autonomous driving and quality-control applications, the need for interpretable and transparent models is growing significantly. The core of this thesis concerns the methodologies of eXplainable Artificial Intelligence (XAI), particularly the Gradient-weighted Class Activation Mapping (Grad-CAM) technique, which makes it possible to visualize the areas of an input image that contribute most to a model’s prediction. This work explores the theory and practical application of Grad-CAM, with a focus on assessing how effectively the technique improves the interpretability of CNN-based decision processes for end-users and relevant stakeholders. This analysis is reinforced by a comparison with other interpretability techniques, such as LIME and SHAP, which reveals Grad-CAM’s strengths and limitations, particularly in terms of localization accuracy and model-specific insights. Several real-world case studies demonstrate applications to medical diagnostics, safety in autonomous vehicles and industrial quality control, showing that Grad-CAM has earned its place in enhancing transparency and building trust in AI-driven systems. The study concludes by providing evidence that interpretability remains a key factor in the development of AI, while pointing out potential future research directions, including the adaptation of interpretability techniques to non-convolutional models, to further support the deployment of responsible AI in high-stakes environments.
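To illustrate the Grad-CAM mechanism summarized above, the sketch below shows a minimal, illustrative PyTorch implementation (not code from the thesis). It assumes a pretrained torchvision ResNet-18 with its last convolutional block (`layer4`) as the target layer, and the `grad_cam` helper is a hypothetical name: gradients of the class score are global-average-pooled into channel weights, the feature maps are combined with those weights, and the result is passed through a ReLU and upsampled into a heatmap over the input image.

```python
# Minimal Grad-CAM sketch (illustrative only; assumes PyTorch + torchvision).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output.detach()          # feature maps of the target layer

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()    # gradients flowing into that layer

target_layer = model.layer4                          # last convolutional block
target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)

def grad_cam(image_batch, class_idx=None):
    """Return a [0, 1] heatmap of the regions that drive the prediction."""
    scores = model(image_batch)                      # forward pass, shape (1, num_classes)
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()      # explain the predicted class
    model.zero_grad()
    scores[0, class_idx].backward()                  # gradient of the class score

    acts = activations["value"]                      # (1, C, H, W)
    grads = gradients["value"]                       # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1))        # weighted sum of feature maps + ReLU
    cam = F.interpolate(cam.unsqueeze(1),            # upsample to input resolution
                        size=image_batch.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze()
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam                                       # overlay on the input image

# Example call with a dummy normalized input:
# heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```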
File | Size | Format
---|---|---
Toffoletto_Marco.pdf (restricted access) | 10.56 MB | Adobe PDF
https://hdl.handle.net/20.500.12608/76497