La responsabilità del datore di lavoro in caso di discriminazioni algoritmiche
CHIARELLO, ANNA
2024/2025
Abstract
Artificial intelligence now exerts a pervasive influence on everyday life and, in particular, on the workplace. Intelligent systems and decision-making algorithms are becoming increasingly entrenched, significantly affecting every stage of the employment relationship, from recruitment and personnel management to termination of the contract. Although presented as technologically advanced and apparently neutral, these tools can in fact carry forms of discrimination that are often difficult to detect and even harder to prove. In view of the many concrete cases of discrimination that have arisen in employment relationships, it is necessary to ask whether the protections provided by national and European law are adequate. Hence the central question of the research: whether the current regulatory framework is genuinely capable of addressing the new forms of algorithmic discrimination and of ensuring effective protection for workers. The thesis first examines the regulatory framework, with particular reference to the GDPR and the AI Act, analysing the implications of automated decisions in personnel selection, evaluation and management, and discussing concrete cases such as the Deliveroo and Amazon cases. It then analyses how artificial intelligence systems operate, revealing the risk of direct and indirect discrimination arising from biases inherent in the data and in the algorithmic logic that guides automated decisions. On this basis, the thesis highlights the liability of employers and of the other parties involved, focusing in particular on risk-prevention duties, joint and several liability, the duty to monitor suppliers and developers, and the allocation of the burden of proof. Finally, it identifies regulatory gaps and evaluates possible additions to the existing rules, taking into account the role of relevant bodies such as the Italian Data Protection Authority, the High-level Expert Group, and the EDPB.
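As a purely illustrative aside (not drawn from the thesis itself): the indirect discrimination described above is typically made measurable by comparing selection rates across protected groups. The minimal Python sketch below applies the so-called four-fifths rule to invented screening data; the group labels, figures and 0.8 threshold are assumptions used only to show how such a disparity can be quantified.

```python
# Minimal, hypothetical sketch: measuring disparate impact in an
# automated screening outcome with the "four-fifths rule".
# All data below is invented for illustration.

def selection_rate(outcomes):
    """Share of candidates marked as selected (1) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

# Invented screening results, grouped by a protected characteristic.
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% selected
}

rates = {group: selection_rate(o) for group, o in outcomes_by_group.items()}
reference = max(rates.values())  # highest selection rate as the benchmark

for group, rate in rates.items():
    ratio = rate / reference
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

The 0.8 threshold reflects the four-fifths rule of thumb used in US employment practice; EU and Italian anti-discrimination law do not fix a comparable numerical threshold, which is one reason such discrimination is hard to prove, as the abstract notes.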
| File | Size | Format |
|---|---|---|
| Chiarello_Anna.pdf (open access) | 619.94 kB | Adobe PDF |
https://hdl.handle.net/20.500.12608/101421