
Bridging the Modality Gap: Generative Adversarial Networks for T1-T2 MRI Image Translation

Publication Type : Conference Paper

Publisher : IEEE

Source : 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT)

DOI : 10.1109/ICCCNT61001.2024.10725367

Campus : Bengaluru

School : School of Computing

Year : 2024

Abstract : CycleGAN, a significant deep learning model, has transformed medical imaging by providing a remedy for translating MRI images between different modalities. Its ability to transform T1-weighted into T2-weighted images is crucial in healthcare environments where obtaining multiple scans can be time-consuming and burdensome for patients. CycleGAN generates images that contain the vital information healthcare professionals need to diagnose patients without exposing them to further radiation, offering a more effective and safer way of diagnosing patients. The adaptability of CycleGAN also allows scalability and customization for different types of medical images, so that scanning can be achieved with less radiation exposure without losing or compromising any important information.
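The translation described in the abstract rests on CycleGAN's cycle-consistency constraint: a generator G maps T1 to T2, a second generator F maps T2 back to T1, and training penalizes the round-trip reconstruction error so the translation preserves anatomical content. The toy sketch below illustrates only that idea with stand-in linear maps; the functions `G`, `F`, and `cycle_consistency_loss` are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of CycleGAN's cycle-consistency idea (not the paper's
# actual networks): G maps "T1" to "T2", F maps "T2" back to "T1", and the
# loss penalizes || F(G(x)) - x ||_1 so the translation stays invertible.

def G(x):
    # Stand-in T1 -> T2 generator: a fixed linear map for illustration only.
    return 2.0 * x + 1.0

def F(y):
    # Stand-in T2 -> T1 generator, chosen here as the exact inverse of G.
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x):
    # Mean L1 distance between the input and its round-trip reconstruction.
    return np.mean(np.abs(F(G(x)) - x))

x = np.random.rand(4, 64, 64)  # a batch of toy "T1" images in [0, 1)
print(cycle_consistency_loss(x))
```

In a real CycleGAN this term is added (in both directions) to the adversarial losses of two discriminators; here, because `F` exactly inverts `G`, the loss is near zero by construction.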

Cite this Research Publication : Pandey, Mayank, Naga Sai Shreya Kunda, Pranave Kc, Tripty Singh, and Rekha R. Nair. "Bridging the Modality Gap: Generative Adversarial Networks for T1-T2 MRI Image Translation." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), pp. 1-5. IEEE, 2024.
