Interpretability Challenges in Deep Learning: A Focus on Symbolic Representation
HAKIMINEJAD, SEPASEH
2023/2024
Abstract
As deep learning models continue to demonstrate unprecedented performance across various domains, the interpretability of these models becomes a critical concern. Deep learning models are often recognized for their proficiency in addressing statistical problems rather than for their ability to perform calculations or process symbolic data. This thesis explores the potential of incorporating symbolic representations, aiming to enhance the transparency and interpretability of deep learning models.

File | Size | Format
---|---|---
2041592-Sepaseh-Hakiminejad.pdf (open access) | 2.67 MB | Adobe PDF
https://hdl.handle.net/20.500.12608/70907