CORRUZIONE ED INCENTIVI NELL'INTELLIGENZA ARTIFICIALE: POSSIBILITÀ, MECCANISMI E IMPLICAZIONI
(Corruption and Incentives in Artificial Intelligence: Possibilities, Mechanisms, and Implications)
TOMMASI, NICCOLÒ
2023/2024
Abstract
This thesis explores the vulnerability of artificial intelligence (AI) to corruption and cognitive biases, analyzing the dynamics and ethical implications associated with the use of large language models (LLMs) and other advanced AI technologies. Through a review of existing literature and an analysis of experimental tests, the study investigates AI's potential to replicate human behaviors and influence decisions based on cognitive distortions. The research focuses on how techniques such as machine learning and deep learning have transformed AI, enabling it to interact with users in sophisticated yet not always impartial ways. Emphasizing the role of incentives and decision-making mechanisms within the AI economy, this thesis employs cognitive bias tests and prompt manipulation techniques to assess AI models' resilience to external influences. The findings reveal that, despite technological advancements, AI models remain susceptible to cognitive biases and manipulations, underscoring a lack of transparency and robustness in critical decision-making contexts. The ethical implications of these findings are significant, as the widespread adoption of AI in sensitive areas such as healthcare, justice, and education demands systems that minimize the risk of distortions. The thesis concludes with recommendations to enhance AI resilience, advocating for a responsible and conscientious approach in the development and deployment of these technologies.
File | Size | Format
---|---|---
Tommasi_Niccolò.pdf (under embargo until 13/12/2025) | 1.98 MB | Adobe PDF
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license; metadata are released under a CC0 license.
https://hdl.handle.net/20.500.12608/81011