Generative Adversarial Networks for Contrast-Enhanced Mammography: Translating Low-Energy Scans into Dual-Energy Subtracted Images

HOSSEINIPOUR, MOHAMMAD
2023/2024

Abstract

This study explores the use of generative models, particularly Generative Adversarial Networks (GANs), for generating Dual-Energy Subtracted (DES) images in the context of Contrast-Enhanced Mammography (CEM) for breast cancer screening. Traditional CEM offers enhanced diagnostic performance but comes with certain risks and drawbacks. To mitigate these risks, we propose novel approaches for generating synthetic DES breast images that maintain the high contrast necessary for effective mass detection without relying on a contrast agent. We explore two main strategies. The first guides the GAN's objective function with the Contrast-to-Noise Ratio (CNR) in three different ways: SoftCNRloss, PeakCNRloss, and Adaptive Exponential Contrast Loss (AECloss). These approaches refine the GAN's loss function to emphasize the contrast of the masses. The second strategy introduces attention mechanisms into the U-Net architecture (IAMUNet) to improve feature extraction and generation, resulting in contrast-enhanced, high-quality image generation. Among these approaches, IAMUNet_256, a U-Net variant enhanced with Channel, Spatial, and Multi-Scale Channel Attention, demonstrates superior performance in generating high-quality DES images, both quantitatively and qualitatively. These results suggest that the proposed methods hold significant promise for improving diagnostic performance while reducing patient exposure to radiation and contrast agents.
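The abstract names the CNR-guided losses (SoftCNRloss, PeakCNRloss, AECloss) without reproducing their definitions. As an illustration of the general idea only — a penalty that rewards high Contrast-to-Noise Ratio inside annotated mass regions — a minimal sketch follows. The function names `cnr` and `soft_cnr_loss`, and the exact formulation (matching the generated image's ROI CNR to the target's), are assumptions for illustration, not the thesis's actual loss code.

```python
import numpy as np

def cnr(image, mass_mask):
    """Contrast-to-Noise Ratio of a region of interest.

    CNR = (mean(ROI) - mean(background)) / std(background),
    with a small epsilon to avoid division by zero.
    """
    roi = image[mass_mask]
    background = image[~mass_mask]
    return (roi.mean() - background.mean()) / (background.std() + 1e-8)

def soft_cnr_loss(generated, target, mass_mask):
    """Hypothetical CNR-guided penalty: encourage the generated
    image's mass contrast to match the target DES image's."""
    return abs(cnr(generated, mass_mask) - cnr(target, mass_mask))
```

In practice such a term would be added, with a weighting factor, to the usual adversarial and reconstruction losses of the GAN, so that the generator is explicitly rewarded for preserving lesion contrast rather than only global pixel fidelity.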
Deep Learning
GANs
Medical Imaging
Image Translation
AI in Healthcare
File: Hosseinipour_Mohammad.pdf (Adobe PDF, 21.42 MB, open access)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/80203