Multi-branch adaptive fusion for multimodal magnetic resonance image synthesis
Chinese Journal of Medical Physics (中国医学物理学杂志) [ISSN:1005-202X/CN:44-1351/R]
- Issue:
- No. 4, 2026
- Page:
- 436-444
- Research Field:
- Medical Imaging Physics
- Publishing date:
Info
- Author(s):
- QIU Hongxuan1,2; LIU Mingyang3; LIU Surui2; WANG Yuxi1,2; PENG Bo2; DAI Yakang2; YU Weibo1
- 1. School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun 130012, China; 2. Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; 3. School of Electrical and Mechanical Engineering, Jilin University of Architecture and Technology, Changchun 130114, China
- Keywords:
- magnetic resonance imaging; multimodal image synthesis; generative adversarial networks; feature consistency
- CLC Number:
- R318
- DOI:
- 10.3969/j.issn.1005-202X.2026.04.003
- Abstract:
- Multimodal magnetic resonance imaging provides comprehensive information that is essential for clinical diagnosis and treatment, and synthesizing images of missing modalities can significantly improve medical image analysis. To address the limitations of existing image synthesis methods, namely fixed input modalities, suboptimal image quality, and insufficient anatomical fidelity of the synthesized images, a multi-branch modality-adaptive fusion and consistency-guided generative adversarial network is proposed for multimodal magnetic resonance image synthesis. The network achieves efficient medical image synthesis from arbitrary modality combinations by incorporating target modality labels, a zero-image placeholder strategy, and a multi-branch encoding mechanism: the target modality labels guide the decoder to focus on the desired features, the zero-image placeholder strategy handles incomplete input modalities, and the multi-branch encoding mechanism provides independent feature extraction and flexible cross-modality fusion, thereby enhancing the model's adaptability to multimodal inputs and its synthesis performance. Furthermore, a modality consistency guidance mechanism aligns the encoded features from different modalities in the latent space, reinforcing cross-modal anatomical consistency and thus improving the anatomical fidelity and overall quality of the synthesized images. The effectiveness of the proposed method is validated on the BraTS2020 and ISLES2015 datasets: the synthesized images achieve a peak signal-to-noise ratio exceeding 24 dB, outperforming existing image synthesis methods and exhibiting superior visual quality.
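To make the mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch of a multi-branch generator with per-modality encoder branches, a zero-image placeholder for absent inputs, target-modality label conditioning of the decoder, and a latent feature-consistency term. It is an illustration under stated assumptions, not the authors' implementation: the class and function names (`BranchEncoder`, `MultiBranchGenerator`, `consistency_loss`), the masked-average fusion rule, and the MSE-to-mean consistency loss are hypothetical stand-ins for the paper's modules.

```python
# Minimal sketch (hypothetical, not the authors' code) of the abstract's ideas:
# independent encoder branches, zero-image placeholders, target-label-conditioned
# decoding, and a latent feature-consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F

MODALITIES = ["T1", "T1ce", "T2", "FLAIR"]  # typical BraTS2020 sequences

class BranchEncoder(nn.Module):
    """One independent encoder branch per input modality."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class MultiBranchGenerator(nn.Module):
    def __init__(self, ch=32, n_mod=len(MODALITIES)):
        super().__init__()
        self.encoders = nn.ModuleList(BranchEncoder(ch) for _ in range(n_mod))
        # The decoder sees fused features concatenated with a broadcast one-hot
        # target-modality label, steering synthesis toward the requested modality.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2 + n_mod, ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, images, target_label):
        # images: dict of available modalities -> (B,1,H,W) tensors.
        # Any modality missing from the dict is replaced by a zero image.
        ref = next(iter(images.values()))
        present = torch.tensor([float(m in images) for m in MODALITIES],
                               device=ref.device)
        feats = [enc(images.get(m, torch.zeros_like(ref)))
                 for enc, m in zip(self.encoders, MODALITIES)]
        # Adaptive fusion: average branch features over modalities actually present.
        stack = torch.stack(feats, dim=1)            # (B, n_mod, C, h, w)
        mask = present.view(1, -1, 1, 1, 1)
        fused = (stack * mask).sum(dim=1) / mask.sum().clamp(min=1.0)
        b, _, h, w = fused.shape
        lbl = target_label.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.decoder(torch.cat([fused, lbl], dim=1)), feats, present

def consistency_loss(feats, present):
    """Pull the available branches' latent features toward their mean: a simple
    stand-in for the paper's modality consistency guidance in latent space."""
    idx = present.nonzero(as_tuple=True)[0]
    if idx.numel() < 2:
        return feats[0].new_zeros(())
    sel = torch.stack([feats[int(i)] for i in idx], dim=0)
    return F.mse_loss(sel, sel.mean(dim=0, keepdim=True).expand_as(sel))

# Usage: synthesize a T1ce slice from T1 + FLAIR only (two modalities absent).
gen = MultiBranchGenerator()
t1, flair = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
label = F.one_hot(torch.tensor([1, 1]), num_classes=len(MODALITIES)).float()
fake, feats, present = gen({"T1": t1, "FLAIR": flair}, label)
loss_c = consistency_loss(feats, present)  # added to the adversarial objective
```

For reference, the headline metric quoted in the abstract is peak signal-to-noise ratio, PSNR = 10 · log10(MAX² / MSE), where MAX is the maximum image intensity and MSE is the mean squared error against the real target modality; the reported values exceed 24 dB on BraTS2020 and ISLES2015.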
Last Update: 2026-04-28