Continual Learning and Fairness Techniques for Pathology Classification of Chest X-ray Images

CECCON, MARINA
2022/2023

Abstract

In recent years, Deep Learning (DL) techniques have been successfully applied to various medical applications, achieving remarkable results. In particular, in the field of medical imaging, DL models have reached human-level performance, especially in disease diagnosis using Chest X-ray images, thanks to the availability of multiple hospital-scale datasets. However, traditional DL techniques face limitations when applied to the medical field. First, they are suited only to static settings: when a DL model is trained sequentially on new data, its performance on the old data degrades, a phenomenon known as Catastrophic Forgetting. This limitation becomes significant in the medical field, where new patient data from unseen populations may arrive, new diseases may emerge, or disease prevalence may change over time. Second, DL algorithms can be biased against certain sub-populations, meaning that they may exhibit gaps in predictive performance across groups defined by protected attributes such as age, race/ethnicity, sex/gender, and socioeconomic status. In our work, we study techniques to overcome these issues by considering the problem of pathology classification of Chest X-ray images in a setting in which the data arrives over time in a stream of tasks, each introducing new pathologies, and includes information about protected attributes. To train the model sequentially on each task without forgetting the pathologies of previous tasks, we implement Continual Learning techniques, while Fairness strategies are used to detect bias. Moreover, we analyze how the bias evolves from one task to the next, and the influence of the Continual Learning strategies on this evolution.
Keywords: Continual Learning, Fairness, Chest X-ray, bias
Files in this item:

File: Ceccon_Marina.pdf
Size: 4.19 MB
Format: Adobe PDF
Access: under embargo until 08/03/2025
The text of this website © Università degli Studi di Padova. The full text is published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/50907