Artificial Intelligence and Law: Computational Models of Argument and Explainability Solutions

PRAJESCU, ANDRADA IULIA
2024/2025

Abstract

The urgent need for explainable AI decisions has led to the development of several techniques over time. This thesis reviews the most significant approaches, focusing on their application in the legal domain and identifying argumentation as the most suitable explainability method. It also includes a section comparing deep learning with classical supervised AI, highlighting the differences and their implications for explainability in legal AI systems, and a section on the legal framework governing explainability in Artificial Intelligence, intended to ensure that future systems provide reliable, accurate and impartial decision support, with convincing arguments to back up machine judgements.
Keywords: AI and Law, Right to explanation, Argumentation, Machine learning
File: Prajescu_AndradaIulia.pdf (Adobe PDF, 1.72 MB, restricted access)
Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/84858