The Benefits and Harms of Artificial Intelligence: An Education-Based Policy Approach to Reduce Individual and Societal Risks
COX, PATRICK ALEXANDER
2024/2025
Abstract
The EU and many other countries have agreed that AI poses catastrophic risks (Stacey and Milmo, 2023), and a majority of surveyed AI researchers estimated at least a 1 in 20 chance of existential risk from AI (Nolan, 2024). Nevertheless, one need not entertain a robot takeover or other scenarios associated with existential or catastrophic risk to recognize AI's more immediate dangers, including bias, cognitive overload, misinformation, and dependency, as evidenced by research both on AI and on technologies with similar characteristics (e.g., social media). This project explores a minimal and relatively inexpensive intervention, user licensing based on AI literacy, to reduce these risks to society and to the individual. Reducing risk through regulation requires informed policymaking and political will, which in turn often depend on public support. Given the ostensible benefits of AI platforms for users, the public is unlikely to support regulation without also understanding AI's dangers, and users should be aware of its potential psychological and other harms. Because AI literacy is limited in school curricula globally and rarely covers the societal implications of AI, curricular emphasis on those implications is integral to public safety and security. User licensing is therefore considered as a means of ensuring that citizens are educated about the benefits and dangers of AI, enabling informed decision-making about their use of AI and their policy preferences.
https://hdl.handle.net/20.500.12608/84002