Optimizing Precision and Completeness in Domain-Specific RAG Systems: A Multi-Agent Approach
Cociancich, Fabio
2025/2026
Abstract
Large Language Models (LLMs) offer immense potential for knowledge-intensive industries, but their adoption is often hindered by hallucinations. While Retrieval-Augmented Generation (RAG) mitigates this, standard pipelines frequently struggle with retrieval noise and complex multi-hop reasoning. This thesis presents a highly optimized Multi-Agent RAG architecture tailored to the Italian insurance sector, developed during an apprenticeship at Data Reply.

To overcome baseline limitations, the system employs Hybrid Retrieval (combining BM25 and dense embeddings) alongside customized MAIN-RAG and RAGentA frameworks. The MAIN-RAG architecture was streamlined by eliminating the initial Predictor agent; instead, a Judge agent directly scores raw documents, filtering noise while minimizing computational overhead. For complex queries, a modified RAGentA pipeline introduces an autonomous Reviser agent. This Reviser uses a Multi-Query approach to trigger targeted secondary retrievals, deliberately reusing the Judge agent's logic to filter newly retrieved documents before synthesizing the final answer.

Rigorous testing via RAGAS, RAGEval, and human expert evaluation on a custom dataset of Italian insurance documents demonstrates substantial improvements over a Baseline RAG. The modified MAIN-RAG increased Faithfulness from 71.34% to 95.42%, reduced Hallucinations from 12.23% to 7.14%, and achieved a 90% human success rate. The modified RAGentA pipeline excelled at complex reasoning, pushing the success rate to 94%, lowering Hallucinations to 4.37%, and maximizing Answer Completeness at 82.43%. Reusing the Judge agent during revision proved critical for reducing token usage while maintaining high performance, demonstrating that strategically tailored LLM agents provide a highly effective, cost-efficient solution for trustworthy enterprise AI.
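The Hybrid Retrieval step described above can be sketched as a rank-fusion over the two retrievers' outputs. This is a minimal illustration, not the thesis's exact method: the reciprocal rank fusion rule and the constant `k=60` are assumptions, and the document identifiers are invented examples.

```python
# Hypothetical sketch of hybrid retrieval: merge a BM25 (lexical) ranking and a
# dense-embedding (semantic) ranking via reciprocal rank fusion (RRF).
# The fusion rule and k=60 are illustrative assumptions, not the thesis's method.

def reciprocal_rank_fusion(bm25_ranking, dense_ranking, k=60):
    """Merge two ranked lists of document ids into one hybrid ranking."""
    scores = {}
    for ranking in (bm25_ranking, dense_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            # Documents ranked highly by either retriever accumulate more score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: a chunk ranked highly by both retrievers rises to the top.
bm25 = ["polizza_auto", "condizioni_generali", "glossario"]
dense = ["condizioni_generali", "polizza_auto", "massimali"]
print(reciprocal_rank_fusion(bm25, dense))
```

Rank-based fusion like this sidesteps the need to normalize BM25 scores against cosine similarities, which live on incomparable scales.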
| File | Size | Format |
|---|---|---|
| Cociancich_Fabio.pdf (under embargo until 14/04/2029) | 6.75 MB | Adobe PDF |
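The Judge agent's noise filtering can be sketched as an adaptive cutoff over per-document relevance scores. In the thesis the Judge is an LLM agent; here the scores are given as plain numbers, and the mean-minus-weighted-stdev threshold, the `sigma_weight` parameter, and the document names are illustrative assumptions only.

```python
# Hypothetical sketch of Judge-style document filtering (MAIN-RAG-inspired).
# An LLM Judge would assign each retrieved chunk a relevance score; here the
# scores are supplied directly, and chunks below an adaptive cutoff derived
# from the score distribution are dropped as retrieval noise before generation.

from statistics import mean, stdev

def judge_filter(scored_docs, sigma_weight=0.5):
    """Keep documents whose score clears mean - sigma_weight * stdev.

    scored_docs: list of (doc_id, score) pairs, e.g. Judge scores in [0, 1].
    """
    scores = [s for _, s in scored_docs]
    spread = stdev(scores) if len(scores) > 1 else 0.0
    cutoff = mean(scores) - sigma_weight * spread
    return [doc for doc, s in scored_docs if s >= cutoff]

docs = [("art_12_massimali", 0.92), ("premessa", 0.35),
        ("art_4_franchigia", 0.88), ("indice", 0.10)]
print(judge_filter(docs))  # low-scoring boilerplate chunks are dropped
```

Because the cutoff adapts to each query's score distribution rather than being fixed, the same filter logic can be reused unchanged on the Reviser's secondary retrievals, which is the cost-saving reuse the abstract highlights.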
The text of this website is © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.
https://hdl.handle.net/20.500.12608/107321