Self Assessment Tool for Medical Exam Candidates

DEHGHAN, ALIREZA
2023/2024

Abstract

Assessing students' knowledge, particularly through oral examinations, is a resource-intensive process that has traditionally required human experts. This thesis investigates the development of a self-assessment tool that enables automated evaluation of medical exam candidates. Building on recent advances in natural language processing (NLP) and multimodal analysis, the tool automates both question generation and the evaluation of student responses. It consists of two main components: a natural language processing component and a multimodal analysis component. The NLP component generates questions from provided textbooks, transcribes oral responses with speech-to-text, and evaluates those responses, using language models and advanced LLMs to parse and generate accurate medical content. This evaluation is enriched by multimodal analysis of prosodic features and facial expressions, which identifies student confidence and uncertainty so that the assessment more closely resembles that of a human examiner. Alongside the system, a web application was developed that lets users upload educational materials and assess themselves, providing the basis for testing and experiments. The experiments show that the system delivers effective and accurate feedback within an acceptable time. This self-assessment tool is a notable step forward for medical education, offering a scalable, efficient, and accurate way of evaluating students, with the ultimate goals of improving learning outcomes and easing educators' workload.
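Since the full text is under restricted access, the pipeline described in the abstract can only be illustrated in outline. The following is a minimal sketch of the question-generation and answer-evaluation steps, assuming an LLM backend reachable through a hypothetical llm_complete helper; the prompts and function names are illustrative, not the author's implementation.

```python
# Minimal sketch of the question-generation and answer-evaluation steps
# described in the abstract. `llm_complete` is a hypothetical stand-in for
# whichever LLM backend (commercial API or open-source model) the tool uses.
from dataclasses import dataclass


def llm_complete(prompt: str) -> str:
    """Hypothetical call to a large language model; replace with a real client."""
    raise NotImplementedError("plug in an actual LLM client here")


@dataclass
class GradedAnswer:
    question: str
    transcript: str
    feedback: str


def generate_questions(textbook_excerpt: str, n: int = 3) -> list[str]:
    """Turn a textbook passage into open-ended oral-exam questions."""
    prompt = (
        f"Read the following medical textbook excerpt and write {n} "
        f"open-ended oral-exam questions about it, one per line:\n\n"
        f"{textbook_excerpt}"
    )
    return [q.strip() for q in llm_complete(prompt).splitlines() if q.strip()]


def evaluate_answer(question: str, transcript: str, excerpt: str) -> GradedAnswer:
    """Grade a transcribed oral answer against the source material."""
    prompt = (
        "You are a medical examiner. Using only the reference text, grade the "
        "student's answer to the question and give short feedback.\n\n"
        f"Reference text:\n{excerpt}\n\n"
        f"Question: {question}\n\n"
        f"Student answer (speech-to-text transcript): {transcript}"
    )
    return GradedAnswer(question, transcript, feedback=llm_complete(prompt))
```

In the system described by the abstract, the transcript of an oral answer would come from a speech-to-text model, and the textual grade would be complemented by confidence cues extracted from prosody and facial expressions.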
Keywords: Answer Assessment, Question Generation, Self Assessment, AQG
File in this record: Thesis Final.pdf (Adobe PDF, 1.92 MB, restricted access)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/70905