Getting Ready for and Recovering from Breast Cancer Surgery: Assessing the Accuracy and Readability of Information Provided by ChatGPT-4

PALMARIN, ELENA
2023/2024

Abstract

Introduction: Breast cancer is a leading cause of morbidity and mortality worldwide, profoundly impacting patients' physical and psychological well-being. Accurate and accessible perioperative health information empowers patients and improves recovery outcomes. Artificial intelligence (AI) tools such as ChatGPT have gained attention for their potential in health communication, yet their reliability and usability for sensitive topics such as breast cancer surgery remain under evaluation.

Objective: This study evaluates the accuracy and readability of responses generated by ChatGPT-4o to questions commonly asked by breast cancer patients about preoperative preparation and postoperative recovery.

Materials and Methods: An observational study was conducted using 15 simulated patient queries on breast cancer surgery preparation and recovery. Responses generated by ChatGPT-4o were rated for accuracy by two experienced breast surgeons on a 4-point Likert scale, and readability was assessed with the Flesch-Kincaid Grade Level (FKGL). Descriptive statistics were used to summarize the findings.

Results: Of the 15 responses evaluated, 13 were rated “accurate and comprehensive” and 2 were deemed “correct but incomplete”; none was classified as “partially incorrect” or “completely incorrect”. The median FKGL score was 11.2, corresponding to a high-school reading level. Thus, although most responses were technically accurate, their language exceeded the readability levels recommended for patient-directed materials.

Discussion: ChatGPT-4o demonstrated high accuracy in providing perioperative information but fell short in accessibility because of its elevated readability scores. Its use as a supplementary tool in patient education is promising but requires careful oversight by healthcare professionals to address its limitations in comprehensiveness and language adaptation.

Conclusion: ChatGPT-4o shows potential as a complementary resource for patient education in breast cancer surgery but should not replace direct interaction with healthcare providers. Future research should focus on enhancing language models' ability to generate accessible, patient-friendly content while maintaining accuracy.
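The abstract does not specify how the 15 simulated queries were submitted to the model. The sketch below shows one plausible workflow, assuming programmatic access through the official OpenAI Python SDK; the model name, the example queries, and the client configuration are illustrative assumptions, not details from the study.

```python
# Minimal sketch: collect ChatGPT-4o answers to simulated patient queries.
# Assumes the official OpenAI Python SDK (`pip install openai`) with an
# OPENAI_API_KEY set in the environment; the study itself may equally
# have used the web interface rather than the API.
from openai import OpenAI

client = OpenAI()

# Illustrative queries; the 15 actual study questions are in the thesis.
queries = [
    "How should I prepare the night before my breast cancer surgery?",
    "When can I resume exercise after a mastectomy?",
]

answers = []
for q in queries:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": q}],
    )
    answers.append(resp.choices[0].message.content)
```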
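Likewise, the readability scoring can be reproduced from the standard FKGL formula, FKGL = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. Below is a minimal, self-contained sketch; the vowel-group syllable counter is a rough illustrative approximation, whereas published analyses typically rely on a validated readability tool.

```python
import re
from statistics import median

def count_syllables(word: str) -> int:
    """Rough syllable count via vowel groups; drops a typical silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / max(len(sentences), 1)
            + 11.8 * syllables / max(len(words), 1)
            - 15.59)

# Score each collected answer and summarize with the median, as in the study.
answers = ["Fast from midnight before surgery and take only approved medication."]
print(f"Median FKGL: {median(fkgl(a) for a in answers):.1f}")
```

A median score well above the sixth-to-eighth-grade level commonly recommended for patient education materials, as reported here (11.2), flags content that may need simplification before being handed to patients.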
Keywords: ChatGPT; Health education; Mastectomy
File in this item: palmarin_elena_2024_09_12_tesi.pdf (Adobe PDF, 867.72 kB, restricted access)


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/78604