Large Language Models: Critical Issues and Challenges in Natural Language Processing and the Role of Explainable AI
RANZOLIN, NICOLA
2024/2025
Abstract
Large Language Models (LLMs) represent a significant breakthrough in the evolution of artificial intelligence, redefining how humans interact with computational systems. However, their large-scale deployment raises critical concerns regarding transparency, bias, hallucinations, and environmental sustainability. This thesis critically examines these issues, providing an overview of the architectural foundations of LLMs and the current state of the art. It also addresses the role of Explainable AI as a tool to enhance model interpretability and accountability, fostering a more ethical and sustainable approach. The analysis aims to outline the challenges and opportunities related to the development of more reliable and understandable LLMs, suggesting future directions for responsible and informed use.

| File | Size | Format |
|---|---|---|
| llm_criticità_sfide_nlp_ruolo_xai.pdf (open access) | 2.42 MB | Adobe PDF |
https://hdl.handle.net/20.500.12608/92216