Linguistic and conversational features of AI chatbots triggering users' trust: a literature review

SERIK, AZHAR
2025/2026

Abstract

As AI chatbots are increasingly used in sensitive domains such as health, education, and mental health support, user trust has become a key factor in their effectiveness and adoption. This literature review, conducted according to the PRISMA methodology, synthesizes current research to identify the specific linguistic and conversational features that trigger trust in human-chatbot interaction. The analysis is organized into four core dimensions: politeness, communication style and tone, empathy, and anthropomorphism. A key finding is that trust is a dynamic and context-dependent process. The review synthesizes findings on features such as pronoun use, authoritative versus powerless tone, use of disclaimers, verb tense, and active voice. By integrating insights across disciplines, this thesis identifies key conversational mechanisms that foster trust in AI chatbots and highlights gaps in the existing literature, offering implications for the design of more trustworthy conversational agents.
Keywords: AI, linguistic, trust, chatbot
File: AZHAR Serik-2.pdf (open access, Adobe PDF, 391.14 kB)

The text of this website © Università degli studi di Padova. Full Text are published under a non-exclusive license. Metadata are under a CC0 License

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/105056