AI-Driven User Support: A RAG Approach

AKYOL, AHMED
2024/2025

Abstract

Information Technology (IT) user support often relies on knowledge scattered across documents inside companies. Finding the right, current answer quickly is hard, and closed-book language models can sound confident while being wrong. This thesis addresses that gap by combining retrieval-augmented generation (RAG) with a light agent layer to produce grounded, auditable assistance. RAG retrieves relevant passages at question time and conditions the response on that evidence, reducing outdated or unsupported claims. On top of this, a simple agent turns the retrieved context into practical outputs, such as a clear explanation, a concise ticket, or step-by-step guidance, while keeping citations and brief self-checks to maintain faithfulness. The work focuses on what matters most in practice: how documents are chunked, which embedding models are used, how retrieval is tuned, and how vector search is configured. Evaluation combines standard retrieval metrics with judgment-based checks of faithfulness and usefulness, alongside operational measures such as accuracy. Overall, the thesis offers a straightforward recipe for turning a general Large Language Model (LLM) into a grounded support assistant: retrieve first, write with evidence, and keep outputs simple and ready to act on.
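The retrieve-first recipe the abstract describes can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: a bag-of-words vector with cosine similarity stands in for a learned embedding model and a vector database, and the document snippets and chunk size are made up for the example.

```python
# Minimal sketch of the retrieve-then-answer loop: chunk documents,
# embed chunks and query, return the top-k chunks as grounding evidence.
# A toy bag-of-words "embedding" replaces a neural encoder, for illustration only.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word windows (one common chunking choice)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: lowercased bag-of-words counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the top-k most similar chunks; these ground the generated answer."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical IT-support snippets, standing in for a company knowledge base.
docs = [
    "To reset your VPN password open the self-service portal and choose Reset.",
    "Printer issues: check the driver version before reinstalling the queue.",
]
chunks = [c for d in docs for c in chunk(d, size=8)]
evidence = retrieve("how do I reset my VPN password", chunks, k=1)
print(evidence[0])
```

In a full pipeline, the retrieved `evidence` would be placed in the LLM prompt along with citations, so the generated explanation or ticket stays anchored to the source passages.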
Keywords: LLM, RAG, Embedding
Files in this item:
Akyol_Ahmed.pdf — Restricted access — 1.64 MB, Adobe PDF
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/93729