Abstract:
This manuscript explores the use of hybrid deep learning architectures for breast tumor segmentation in Dynamic Contrast-Enhanced MRI (DCE-MRI), using the public BreastDM dataset.
After reviewing recent state-of-the-art methods, we identified key challenges in segmentation accuracy and model robustness in this domain. To address these issues, we investigated three configurations: a standalone TransUNet, a conditional GAN (cGAN) with a PatchGAN discriminator, and a cGAN
employing a hybrid CNN-Transformer discriminator. Experimental results show that the standalone TransUNet achieves a mean Dice score of 74%, outperforming several existing models and
ranking among the best-performing approaches on this dataset. The cGAN-based variants also
demonstrated promising results, highlighting their ability to produce realistic and coherent segmentations. Nevertheless, our internal comparison revealed that the direct supervision in TransUNet
led to more stable and efficient training. In summary, this work offers two main contributions:
it establishes TransUNet as a strong baseline for DCE-MRI breast tumor segmentation, and it
provides the first empirical study of cGAN-based approaches in this context, offering useful benchmarks and insights for future research.