Synthetic CT Generation from MR Images: A U-Net Deep Learning Approach
Jafarpour, Farshad
2023/2024
Abstract
Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) work hand in hand in radiation therapy planning: CT delivers the attenuation data needed for accurate dose calculation, while MRI excels at capturing soft-tissue contrast, improving the precision of target delineation. However, repeated CT scans carry risks from excess radiation exposure and add cost. This thesis proposes a deep learning approach based on a 2D U-Net model for generating synthetic CT (sCT) images from MRI data. While the approach shows promise in reducing dependence on CT, further advances are needed before sCT can fully replace CT in treatment planning. The study focuses on patients with neurodegenerative diseases, whose images often suffer from artifacts caused by involuntary movement. We develop a robust preprocessing pipeline using 3D Slicer, an open-source platform widely used in the medical imaging community, to achieve accurate rigid registration of CT and MRI volumes. The U-Net model was pre-trained on a high-quality dataset of glioblastoma patients and then fine-tuned via transfer learning on the target dataset of neurodegenerative patients. We evaluate the model's performance using Mean Absolute Error (MAE) and Mean Error (ME), highlighting the challenges posed by patient movement and metal artifacts. The results demonstrate the model's ability to generate accurate sCT images, though performance varies with the quality of the input MRI. In conclusion, this work underscores the potential of deep learning to reduce radiation exposure in radiotherapy while highlighting the need for further refinement in handling artifacts.
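As a concrete illustration of the evaluation metrics named above, MAE and ME can be computed voxel-wise in Hounsfield units (HU) between the synthetic and reference CT. This is a minimal sketch under assumptions not stated in the abstract (NumPy arrays as input, an optional body-contour mask); the exact masking and averaging used in the thesis may differ.

```python
import numpy as np

def mae_me(sct, ct, mask=None):
    """Voxel-wise Mean Absolute Error (MAE) and Mean Error (ME),
    in Hounsfield units, between a synthetic CT and the reference CT.
    `mask` (optional boolean array) restricts the comparison, e.g. to
    the body contour, so air outside the patient does not dilute the scores."""
    sct = np.asarray(sct, dtype=np.float64)
    ct = np.asarray(ct, dtype=np.float64)
    diff = sct - ct
    if mask is not None:
        diff = diff[np.asarray(mask, dtype=bool)]
    mae = np.mean(np.abs(diff))  # average magnitude of HU errors
    me = np.mean(diff)           # signed bias: > 0 means sCT overestimates HU
    return mae, me
```

Reporting ME alongside MAE is useful because ME exposes a systematic over- or under-estimation of HU values (which would bias dose calculation) that MAE alone would hide.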
The text of this website © Università degli studi di Padova. Full Text are published under a non-exclusive license. Metadata are under a CC0 License
https://hdl.handle.net/20.500.12608/73701