Assessing the Suitability of AI-Generated Texts Across Proficiency Levels Using the Functional Adequacy Scale

AKOVA, ALP
2023/2024

Abstract

This thesis explores the potential of large language models (LLMs) in educational settings by evaluating their ability to generate texts that align with human-written texts in terms of functional adequacy. Adopting a novel approach, the study applies the functional adequacy (FA) scale to assess the communicative effectiveness of LLM-generated texts across four dimensions: content, task requirements, comprehensibility, and coherence and cohesion. The results provide insights into the practical implications of integrating LLMs into educational contexts, shedding light on their potential benefits and limitations for language learning and assessment.
Keywords: Large Language Model, Functional Adequacy, Educational AI
Files in this item:
Assessing the Suitability of AI-Generated Texts For Language Teaching Across Proficiency Levels Using the Functional Adequacy Scale.pdf (open access, 1.02 MB, Adobe PDF)


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/67101