LI Xue, ZHOU Jinzhi, MO Chunmei, et al. U-Net automatic lung segmentation based on feature fusion[J]. Chinese Journal of Medical Physics, 2021, 38(6): 704-712. [doi:10.3969/j.issn.1005-202X.2021.06.009]

U-Net automatic lung segmentation based on feature fusion

Chinese Journal of Medical Physics [ISSN:1005-202X/CN:44-1351/R]

Volume:
38
Issue:
2021, No. 6
Pages:
704-712
Column:
Medical Imaging Physics
Publication date:
2021-06-29

Article Info

Title:
U-Net automatic lung segmentation based on feature fusion
Article ID:
1005-202X(2021)06-0704-09
Author(s):
LI Xue1,2, ZHOU Jinzhi1,2, MO Chunmei1,2, YU Xi1,2
1. School of Information Engineering, Southwest University of Science and Technology, Mianyang 621000, China; 2. Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang 621000, China
Keywords:
lung parenchyma; U-Net; automatic segmentation; color feature; texture feature; feature fusion
Classification code:
R318
DOI:
10.3969/j.issn.1005-202X.2021.06.009
Document code:
A
Abstract:
Objective To obtain a more effective feature by fusing lung color features with texture features, and to accurately extract the lung parenchyma by segmenting lung CT images with an improved U-Net deep learning network. Methods The CT image data used in the study were derived from the LIDC-IDRI dataset. Color features and texture features were first extracted through color space conversion and high-order neighborhood statistics, respectively. The two types of features were then fused using a weighted-average histogram, and the fused features were fed into the improved U-Net model for 1 000 CT scan tests, yielding a complete lung parenchyma output. Results The Dice coefficient, sensitivity and specificity of the proposed method were 93%, 96% and 97%, respectively. Conclusion The proposed method achieves higher segmentation accuracy than single-feature segmentation methods, effectively improves the accuracy of lung parenchyma segmentation, and provides a reliable basis for subsequent automatic diagnosis of lung diseases, thereby reducing the cost of clinical diagnosis and shortening diagnosis time.
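The evaluation metrics reported in the Results (Dice coefficient, sensitivity, specificity) follow standard definitions over the confusion-matrix counts of a binary segmentation. The following is a minimal illustrative sketch in pure Python, not the authors' evaluation code; the masks and function name are hypothetical:

```python
def segmentation_metrics(pred, truth):
    """Compute Dice coefficient, sensitivity and specificity for two
    flat binary masks (equal-length sequences of 0/1 pixel labels)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    # Dice = 2*TP / (2*TP + FP + FN); sensitivity = TP / (TP + FN);
    # specificity = TN / (TN + FP). Guard against empty denominators.
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    sens = tp / (tp + fn) if (tp + fn) else 1.0
    spec = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sens, spec

# Example: a predicted lung mask compared with a ground-truth mask
pred  = [1, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 1, 1, 0, 0, 0, 0, 0]
dice, sens, spec = segmentation_metrics(pred, truth)
```

In practice these counts would be accumulated over all pixels of each CT slice (or over the whole test set) rather than over a toy list.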


Memo:
【Received】2020-11-19 【Funding】National Natural Science Foundation of China (11472297); Postgraduate Innovation Fund of Southwest University of Science and Technology (20ycx0056) 【Biography】LI Xue, master's candidate; research interests: medical image processing, machine learning; E-mail: 15378336785@163.com 【Corresponding author】ZHOU Jinzhi, associate professor, master's supervisor; research interests: intelligent information processing, computer networks; E-mail: zhoujinzhi@swust.edu.cn
Last Update: 2021-06-29