
The Dual Use of AI / OSINT: How Technology Reveals the 'Banality of Evil'

SAHIN, MELIKE
2024/2025

Abstract

This thesis examines the dual role of Artificial Intelligence (AI) and Open-Source Intelligence (OSINT) in identifying and preventing human rights violations and other international crimes, as well as their potential for misuse and harm. AI, with its capabilities in analyzing satellite images, monitoring social media, and detecting hate speech, provides powerful tools to detect potential human rights violations, identify early signs of genocidal acts, verify reports, and act as a deterrent by making these actions public and ultimately helping to hold perpetrators accountable. Alongside these opportunities, however, the thesis also addresses significant challenges in the use of this technology, including data quality, ethical issues, privacy concerns, and biases in AI systems. The danger that new technologies and AI will be used to target victims and facilitate violence, as seen in recent conflicts, highlights the need for strong ethical guidelines and international cooperation. Advances in AI, such as integrating data from multiple sources, developing real-time alert systems, and creating adaptive algorithms, show its growing capacity to support the prevention of human rights violations and genocide. The findings indicate that the effectiveness of these tools is closely linked to the speed and accuracy of the legal actions that follow. If not used carefully and guided by strict ethical and legal standards, AI can also enable systematic oppression and banalise evil, in the sense of Hannah Arendt's formula. This perspective shows how new technologies may normalise systematic wrongdoing and even shift responsibility onto machines, making criminal accountability more difficult to establish. The dual use of AI and OSINT is therefore not only a technical question but also an intersectional issue that cuts across legal, ethical, and international security domains, showing how technology can cause harm while appearing neutral and efficient.
Overall, the findings point to the dual nature of AI in this domain: its benefits can be significant, yet its risks are equally profound when governance and accountability mechanisms are insufficient. The thesis also examines how governments and companies share responsibility for AI governance, and the challenge of establishing global standards. Finally, it links these findings to how AI, OSINT, and social media support creative campaigns and evidence-based advocacy while considering the protection of human rights defenders.
Advocacy
AI
OSINT
Technology
International crimes
Files in this item:
File: Sahin_Melike.pdf
Access: open access
Size: 978.52 kB
Format: Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/98742