INTELLIGENZA ARTIFICIALE E DIRITTO PENALE: PROFILI PROBLEMATICI
CUNEO, ALESSANDRO
2024/2025
Abstract
This thesis addresses the dogmatic and systemic tensions generated by artificial intelligence within the domain of criminal law. The intrusion of the artificial agent into the causal chain of the offence, often in the absence of direct human intervention, prompts a reconsideration of the traditional categories of attribution, culpability, and subjective imputability. The investigation begins with an analysis of the evolution of cybercrime and the initial normative responses, both national and international, with particular reference to the Budapest Convention. A targeted focus is dedicated to the AI Act, highlighting, on the one hand, its anthropocentric approach, aimed at ensuring human primacy in the control of intelligent systems, and, on the other, the risk-based logic underpinning Regulation (EU) 2024/1689. Theoretical and practical implications of artificial intelligence in the criminal law domain are further explored, with attention to the programmer's liability, algorithmic autonomy, and the difficulties in subsuming automated conduct within the classical structures of criminal theory. The analysis also addresses specific criminal offences, such as defamation, and the structural transformation these undergo in the age of artificial intelligence, particularly through the use of so-called deepfake technologies. Finally, particular attention is devoted to the phenomenon of self-driving cars, with an analysis of the national and European regulatory framework currently under development, and a focus on the criminal law implications arising from the use of automated systems within the causal dynamics of harmful events.
| File | Size | Format |
|---|---|---|
| Cuneo_Alessandro.pdf (restricted access) | 1.64 MB | Adobe PDF |
https://hdl.handle.net/20.500.12608/84409