Towards explainability in knowledge enhanced neural networks

Author: Mazzieri, Riccardo
Academic year: 2021/2022

Abstract

Research on Deep Learning has achieved remarkable results in recent years, mainly thanks to the computing power of modern computers and the increasing availability of large data sets. However, deep neural models are widely considered black boxes: they employ sub-symbolic representations of knowledge, which are inherently opaque to human beings trying to derive explanations. In this work, we first provide a survey of the research field of Explainable AI, giving more rigorous definitions of the concepts of interpretability and explainability. We then delve deeper into the research field of Neural Symbolic Integration, which tackles the task of combining the statistical learning power of machine learning with the symbolic and abstract world of logic. Specifically, we analyze Knowledge Enhanced Neural Networks (KENN), a special kind of residual layer for neural architectures that makes it possible to inject symbolic logical knowledge into a neural network. We describe and analyze experimental results on the task of collective classification over relational data, and study how KENN automatically learns the importance of logical rules from the training data. Finally, we review explainability methods for KENN, proposing ways to extract explanations for the predictions provided by the model.
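To make the enhancement mechanism concrete, below is a minimal sketch (in PyTorch, not taken from the thesis) of a KENN-style clause enhancer: a residual layer that softly boosts the pre-activations of the literals of a single clause, scaled by a learnable clause weight that plays the role of the rule's importance. The class name, the hard-coded example clause "¬Smokes(x) ∨ Cancer(x)", and the initial weight are illustrative assumptions.

import torch
import torch.nn as nn

class ClauseEnhancer(nn.Module):
    # Residual boost for one disjunctive clause over predicate pre-activations.
    def __init__(self, literal_signs):
        super().__init__()
        # +1 for a positive literal, -1 for a negated one (illustrative encoding).
        self.register_buffer("signs", torch.tensor(literal_signs))
        # Learnable clause weight: how strongly the rule influences predictions.
        self.clause_weight = nn.Parameter(torch.tensor(0.5))

    def forward(self, z):
        # z: [batch, n_literals] pre-activations of the clause's predicates.
        # Softly select the literal that is easiest to satisfy and push it up
        # (or down, for negated literals), scaled by the learned clause weight.
        delta = torch.softmax(self.signs * z, dim=-1) * self.signs
        return z + self.clause_weight * delta

# Usage: enhance base-network pre-activations for "¬Smokes(x) ∨ Cancer(x)".
enhancer = ClauseEnhancer([-1.0, 1.0])
z = torch.randn(4, 2)            # pre-activations from a base network
y = torch.sigmoid(enhancer(z))   # truth degrees after knowledge enhancement

Because the boost is added residually to the pre-activations, a clause weight learned to be near zero effectively disables the rule, which is how the importance of each rule can be read off the trained model.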
Date: 2021-09-22
Pages: 79
Keywords: explainability, neural networks, knowledge
File: tesi_Mazzieri.pdf (open access, Adobe PDF, 1.56 MB)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license; metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/21616