LI Qihang, LIAO Wei. Biomedical text classification model based on attention mechanism[J]. Chinese Journal of Medical Physics, 2022, 39(4): 518-523. [DOI: 10.3969/j.issn.1005-202X.2022.04.023]

Biomedical Text Classification Model Based on Attention Mechanism

Chinese Journal of Medical Physics [ISSN: 1005-202X / CN: 44-1351/R]

Volume:
39
Issue:
2022, No. 4
Pages:
518-523
Column:
Medical Artificial Intelligence
Publication Date:
2022-04-27

Article Info

Title:
Biomedical text classification model based on attention mechanism
Article Number:
1005-202X(2022)04-0518-06
Author(s):
LI Qihang, LIAO Wei
School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
Keywords:
biomedical text; attention mechanism; convolutional neural network; recurrent neural network; text classification
CLC Number:
R318; TP391
DOI:
10.3969/j.issn.1005-202X.2022.04.023
Document Code:
A
Abstract:
The accurate classification of biomedical texts is an important way to promote hospital informatization. Herein, a two-level text classification model based on the attention mechanism is proposed to classify biomedical texts effectively. The model combines the advantages of convolutional neural networks and recurrent neural networks to extract features from the disease texts input by users. At the first level, contextual information in the text is extracted through a Bi-GRU channel and a Bi-LSTM channel; to strengthen the model's feature extraction ability, an attention mechanism is also introduced at this level. The time-series features extracted by the two channels are then concatenated, and the concatenated result is passed to the second level, which further extracts local features from the text; finally, a classifier outputs the classification result. Evaluation on biomedical text classification shows that, compared with baseline models, the proposed model achieves a classification accuracy of 91.45%, demonstrating significant classification performance.
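The attention step described in the abstract can be illustrated with a minimal sketch of additive attention pooling over the hidden states of one recurrent channel. The weight names (`W`, `v`), dimensions, and random inputs below are illustrative assumptions, not the paper's actual implementation, where these parameters are learned jointly with the Bi-GRU/Bi-LSTM channels:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, W, v):
    """Additive attention over time steps.
    H: (T, d) hidden states from one Bi-GRU/Bi-LSTM channel.
    W: (d, a) projection matrix, v: (a,) scoring vector (learned in practice).
    Returns a (d,) attention-weighted summary of the sequence."""
    scores = np.tanh(H @ W) @ v      # one relevance score per time step, shape (T,)
    alpha = softmax(scores)          # attention weights, non-negative, sum to 1
    return alpha @ H                 # weighted sum of hidden states

rng = np.random.default_rng(0)
T, d, a = 6, 8, 4                    # toy sizes: 6 time steps, 8-dim states
H = rng.normal(size=(T, d))
W = rng.normal(size=(d, a))
v = rng.normal(size=(a,))
c = attention_pool(H, W, v)
print(c.shape)                       # (8,)
```

In the full model, one such weighted summary per channel would be concatenated and passed to the second (convolutional) level for local feature extraction.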


Memo:
[Received] 2021-11-05
[Funding] National Natural Science Foundation of China (62001282)
[First Author] LI Qihang, master's student; research interest: natural language processing; E-mail: 599510114@qq.com
[Corresponding Author] LIAO Wei, PhD, associate professor; research interests: biomedicine and natural language processing; E-mail: liaowei54@126.com
Last Update: 2022-04-27