Yang Wanqi, Zhou Ziqi, Guo Xinna. Attention-Guided Multimodal Cardiac Segmentation[J]. Journal of Nanjing Normal University (Natural Science Edition), 2019, 42(03): 27-31. [doi:10.3969/j.issn.1001-4616.2019.03.004]


Journal of Nanjing Normal University (Natural Science Edition) [ISSN:1001-4616/CN:32-1239/N]

Volume:
Vol. 42
Issue:
2019, No. 03
Pages:
27-31
Column:
·Special Column on the National Conference on Machine Learning·
Publication Date:
2019-09-30

Article Information

Title:
Attention-Guided Multimodal Cardiac Segmentation
Article ID:
1001-4616(2019)03-0027-05
Author(s):
Yang Wanqi, Zhou Ziqi, Guo Xinna
School of Computer Science and Technology, Nanjing Normal University, Nanjing 210023, China
Keywords:
attention mechanism; multimodal cardiac segmentation; semi-Siamese network; cross-modal image generation
CLC number:
TP391.4
DOI:
10.3969/j.issn.1001-4616.2019.03.004
Document code:
A
Abstract:
To effectively mine modal-shared and modal-specific information, this paper proposes an attention-guided semi-Siamese network for segmenting multimodal (MRI and CT) cardiac images. Specifically, we first apply the cycle-consistent generative adversarial network (CycleGAN) for bidirectional image generation (i.e., from MRI to CT and from CT to MRI), which resolves the problem of unpaired cardiac images across modalities. Second, we design a new semi-Siamese network that takes as joint input an original CT (or MR) image paired with its generated MR (or CT) counterpart: two encoders first learn modal-specific features separately, a cross-modal attention module then fuses the features of the different modalities, and finally a common decoder produces the modal-shared features used for cardiac segmentation. The whole pipeline is trained end to end. We evaluate the proposed method on a real dataset of unpaired CT and MR cardiac images, and the results show that its segmentation accuracy exceeds that of the baseline methods.
Abstract:
To leverage the modal-shareable and modal-specific information during cross-modal segmentation, we propose a novel cross-modal attention-guided semi-Siamese network for joint cardiac segmentation from MR and CT images. In particular, we first employ cycle-consistent generative adversarial networks to perform bidirectional image generation (i.e., MR to CT and CT to MR), which helps reduce the modal-level inconsistency. Then, with the generated and original MR and CT images, a novel semi-Siamese network is utilized in which 1) two encoders learn modal-specific features separately and 2) a common decoder makes full use of the modal-shareable information from the different modalities for a final consistent segmentation. We further implement cross-modal attention to integrate this shareable and specific information, and the whole model can be trained in an end-to-end manner. In extensive evaluation on unpaired CT and MR cardiac images, our method outperforms the baselines in segmentation performance.
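The fusion pattern described in the abstract — two modal-specific encoders, a cross-modal attention block, and one shared decoder — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear "encoders"/"decoder", the element-wise softmax gate, and all dimensions are illustrative stand-ins for the actual convolutional architecture and attention module.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(f_ct, f_mr):
    """Fuse two modal-specific feature maps with a soft element-wise gate:
    each position becomes a convex combination of the CT and MR features."""
    w = softmax(np.stack([f_ct, f_mr], axis=0), axis=0)  # weights sum to 1 across modalities
    return w[0] * f_ct + w[1] * f_mr

class SemiSiameseSketch:
    """Toy semi-Siamese network: separate encoder per modality,
    cross-modal attention fusion, then a single shared decoder."""
    def __init__(self, in_dim, feat_dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.enc_ct = 0.1 * rng.standard_normal((in_dim, feat_dim))    # CT-specific encoder
        self.enc_mr = 0.1 * rng.standard_normal((in_dim, feat_dim))    # MR-specific encoder
        self.dec    = 0.1 * rng.standard_normal((feat_dim, n_classes))  # shared decoder

    def forward(self, x_ct, x_mr):
        f_ct = np.tanh(x_ct @ self.enc_ct)    # modal-specific features
        f_mr = np.tanh(x_mr @ self.enc_mr)
        fused = cross_modal_attention(f_ct, f_mr)
        return fused @ self.dec               # per-position class scores

# Pair an "original" CT patch with its "generated" MR counterpart
# (flattened to vectors here; real inputs would be image tensors).
model = SemiSiameseSketch(in_dim=16, feat_dim=8, n_classes=4)
rng = np.random.default_rng(1)
x_ct = rng.standard_normal((5, 16))   # 5 toy positions
x_mr = rng.standard_normal((5, 16))   # stand-in for the CycleGAN-generated MR image
scores = model.forward(x_ct, x_mr)    # shape (5, 4)
```

In the paper's architecture these would be convolutional encoder/decoder paths, with CycleGAN supplying the generated counterpart; the sketch only shows how the attention gate lets information from both modalities flow into one common decoder.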



Memo:
Received: 2019-06-24. Foundation item: National Natural Science Foundation of China (61603193, 61876087). Corresponding author: Yang Wanqi, Ph.D.; research interests: machine learning, deep learning, medical image processing. E-mail: yangwq@njnu.edu.cn
Last Update: 2019-09-30