Equità Algoritmica: Affrontare la Discriminazione nell'Intelligenza Artificiale (Algorithmic Fairness: Addressing Discrimination in Artificial Intelligence)
AGAZZI, GIULIA
2023/2024
Abstract
The continuing expansion of artificial intelligence is raising multiple concerns about algorithmic discrimination. The aim of this thesis is to explore how algorithms can perpetuate or amplify existing biases, compromising civil and social rights. Starting from an analysis of the concepts of artificial intelligence and machine learning, it examines how bias can manifest in data and models, leading to unfair decisions that may affect vulnerable groups. The existing regulatory framework is then reviewed, with a focus on the GDPR and the proposed European AI regulation. The thesis goes on to analyse the effectiveness of bias mitigation techniques and of transparency practices such as explainability and algorithm registers. Finally, it proposes both legal and technical solutions to promote algorithmic transparency and concludes with an international comparative analysis of the measures adopted by different countries to combat algorithmic discrimination, emphasizing the importance of an approach that integrates legal and technological strategies to ensure that AI serves a more equitable and fair society.
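As a concrete illustration of the group-level bias the abstract refers to, the sketch below computes a demographic parity difference, one common way of quantifying disparate impact in model decisions. This is a minimal example written for this page, not code from the thesis; the data, group labels, and decision semantics are hypothetical assumptions.

```python
# Minimal sketch (not from the thesis): measuring one common notion of
# algorithmic bias, the demographic parity difference, i.e. the gap in
# positive-decision rates between two demographic groups.
# All data below is hypothetical and purely illustrative.

def positive_rate(decisions):
    """Share of positive (favourable) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_group_a, decisions_group_b):
    """Absolute gap in positive-decision rates between two groups.

    A value close to 0 suggests parity; larger values indicate that one
    group receives favourable decisions at a noticeably different rate.
    """
    return abs(positive_rate(decisions_group_a) - positive_rate(decisions_group_b))

if __name__ == "__main__":
    # Hypothetical model decisions (1 = favourable outcome, 0 = unfavourable)
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. majority group
    group_b = [0, 1, 0, 0, 1, 0, 0, 1]   # e.g. protected group
    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.2f}")
```

In this toy example the gap is 0.38, which a fairness audit of the kind discussed in the thesis would flag for further investigation; bias mitigation techniques aim to reduce such gaps without unduly degrading model accuracy.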
File | Size | Format | Access
---|---|---|---
Giulia_Agazzi.pdf | 2.24 MB | Adobe PDF | Restricted access (accesso riservato)
https://hdl.handle.net/20.500.12608/72741