Automated Risk, Human Cost: Legal and Ethical Considerations of Artificial Intelligence in Military and Security Domains

AMEZGAR, OUMAIMA
2024/2025

Abstract

Artificial intelligence is rapidly reshaping the defence and national security domains, yet its speed, opacity, and dual-use character outpace both legal doctrine and oversight, with significant implications for fundamental rights such as the rights to privacy, equality, due process, and life. This thesis adopts a human-rights and multi-level governance lens to examine how international, regional, and national frameworks address these risks across the AI lifecycle. It poses three questions: which security and military applications most engage human rights, and through what mechanisms; which governance instruments and oversight processes mitigate these risks under operational constraints; and which design-level and process-level controls can operationalise AI stakeholders' compliance with human rights over time. It synthesises a set of core human-rights risk mechanisms, including bias and discrimination, loss of human control, opacity and secrecy (the “double black box” problem), surveillance-driven interferences, and error cascades. It also maps the polycentric but fragmented regime complex of guidelines, laws, standards, and procurement levers intended to ensure accountability, auditability, and traceability throughout the AI lifecycle.
Keywords

Human Rights
Artificial Intelligence
Human Oversight
Defence and Security
AI Governance

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/95761