[1] SHEN Zhenjiong, PENG Zhao, MENG Xiangyin, et al. Automatic optic chiasm segmentation using CT and MRI based on cascaded 3D U-Net[J]. Chinese Journal of Medical Physics, 2021, 38(8): 950-954. [doi:10.3969/j.issn.1005-202X.2021.08.006]

Automatic optic chiasm segmentation using CT and MRI based on cascaded 3D U-Net

Chinese Journal of Medical Physics [ISSN:1005-202X/CN:44-1351/R]

Volume: 38
Issue: 2021, No. 8
Pages: 950-954
Column: Medical Imaging Physics
Publication Date: 2021-08-02

Article Info

Title:
Automatic optic chiasm segmentation using CT and MRI based on cascaded 3D U-Net
Article ID:
1005-202X(2021)08-0950-05
Author(s):
SHEN Zhenjiong1, PENG Zhao1, MENG Xiangyin1, WANG Zhi1,2, XU Xie1,3, PEI Xi1,4
1. Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei 230025, China; 2. Department of Radiation Oncology, the First Affiliated Hospital of Anhui Medical University, Hefei 230022, China; 3. Department of Radiation Oncology, the First Affiliated Hospital of University of Science and Technology of China, Hefei 230001, China; 4. Anhui Wisdom Technology Co., Ltd, Hefei 230088, China
Keywords:
3D U-Net; optic chiasm; automatic segmentation; multimodal
CLC Number:
R318;R811.1
DOI:
10.3969/j.issn.1005-202X.2021.08.006
Document Code:
A
Abstract:
Objective: To achieve automatic segmentation of the optic chiasm with higher accuracy than using CT data alone, by applying a cascaded 3D U-Net to paired head-and-neck data (CT and MRI) of each patient. Methods: The proposed cascaded 3D U-Net consists of an original 3D U-Net followed by an improved 3D D-S U-Net (3D Deeply-Supervised U-Net). Head-and-neck CT images and MRI images (T1 and T2 modalities) of 60 patients were used in the experiment, and the data of 15 patients were randomly selected as the test set. The Dice similarity coefficient (DSC) was used to evaluate the accuracy of automatic optic chiasm segmentation. Results: Over all cases in the test set, the DSC of optic chiasm segmentation was 0.645±0.085 with multimodal data (CT and MRI) and 0.552±0.096 with monomodal data (CT only). Conclusion: The multimodal automatic segmentation model based on the cascaded 3D U-Net achieves reasonably accurate automatic segmentation of the optic chiasm, outperforms the method using only monomodal data, and can help physicians improve the efficiency of radiotherapy planning.
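The abstract names a 3D Deeply-Supervised U-Net as the second stage of the cascade but does not publish its implementation. The snippet below is therefore only a minimal PyTorch-style sketch of the generic deep-supervision idea: auxiliary predictions from intermediate decoder stages are upsampled to the ground-truth resolution and added to the main loss with a reduced weight. The function name `deeply_supervised_loss`, the choice of binary cross-entropy, and the weight `aux_weight` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weight=0.4):
    """Main-output BCE loss plus down-weighted auxiliary losses (deep supervision).

    main_logits:     (N, 1, D, H, W) logits from the final decoder stage
    aux_logits_list: list of lower-resolution logits from intermediate decoder stages
    target:          (N, 1, D, H, W) float binary ground-truth mask
    """
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    for aux_logits in aux_logits_list:
        # Upsample each auxiliary prediction to the ground-truth resolution
        aux_up = F.interpolate(aux_logits, size=target.shape[2:],
                               mode="trilinear", align_corners=False)
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux_up, target)
    return loss

# Toy usage with random tensors standing in for network outputs
main = torch.randn(1, 1, 32, 32, 32)
aux = [torch.randn(1, 1, 16, 16, 16), torch.randn(1, 1, 8, 8, 8)]
gt = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()
print(deeply_supervised_loss(main, aux, gt))
```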
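The reported metric is the Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. The following is a minimal NumPy sketch of that computation, together with a hypothetical illustration of stacking registered CT, T1, and T2 volumes into a multi-channel input for a 3D network; the volume size, registration, and preprocessing used in the paper are not given here and are assumed purely for illustration.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical multimodal input: CT, T1 and T2 volumes resampled to the same
# grid are stacked along a channel axis (channel-first layout: C x D x H x W).
ct = np.random.rand(96, 96, 96).astype(np.float32)   # placeholder CT volume
t1 = np.random.rand(96, 96, 96).astype(np.float32)   # placeholder T1 MRI
t2 = np.random.rand(96, 96, 96).astype(np.float32)   # placeholder T2 MRI
multimodal_input = np.stack([ct, t1, t2], axis=0)     # shape (3, 96, 96, 96)

# Toy evaluation of a predicted optic-chiasm mask against the ground truth
pred_mask = np.zeros((96, 96, 96), dtype=np.uint8)
gt_mask = np.zeros((96, 96, 96), dtype=np.uint8)
pred_mask[40:50, 40:50, 40:50] = 1
gt_mask[42:52, 42:52, 42:52] = 1
print(f"DSC = {dice_coefficient(pred_mask, gt_mask):.3f}")
```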


Memo:
Received: 2021-03-19
Funding: Natural Science Foundation of Anhui Province (1908085MA27); Key Research and Development Program of Anhui Province (1804a09020039); "Med-X Medical Physics and Bioengineering Double First-Class Interdisciplinary" construction fund of the University of Science and Technology of China
First author: SHEN Zhenjiong, MSc, research interests include medical image segmentation and deep learning; E-mail: szj0117@mail.ustc.edu.cn
Corresponding author: PEI Xi, PhD, associate professor, research interests include medical physics, artificial intelligence, and medical imaging; E-mail: xpei@ustc.edu.cn
Last Update: 2021-07-30