Wang Wenjing,Niu Sijie,Li Fan,et al. Edge Guided 3D CT Image Segmentation of Adrenal Gland[J]. Journal of Nanjing Normal University(Natural Science Edition),2025,48(01):93-99. [doi:10.3969/j.issn.1001-4616.2025.01.012]

Edge Guided 3D CT Image Segmentation of Adrenal Gland

Journal of Nanjing Normal University (Natural Science Edition) [ISSN:1001-4616/CN:32-1239/N]

Volume:
48
Issue:
2025(01)
Pages:
93-99
Section:
Computer Science and Technology
Publication date:
2025-02-15

Article Info

Title:
Edge Guided 3D CT Image Segmentation of Adrenal Gland
Article ID:
1001-4616(2025)01-0093-07
Author(s):
Wang Wenjing1,Niu Sijie1,Li Fan2,Cao Caixia3,Cong Wenbin3,Yang Zicheng3
(1.College of Information Science and Engineering,University of Jinan,Jinan 250000,China)
(2.Perception Vision Medical Technologies Co.,Ltd.,Guangzhou 510530,China)
(3.The Affiliated Hospital of Qingdao University,Qingdao 266000,China)
Keywords:
full convolution; Transformer; MedNeXt; unbalanced sample categories; volume segmentation
CLC number:
TP391
DOI:
10.3969/j.issn.1001-4616.2025.01.012
Document code:
A
Abstract:
Computed tomography (CT) is the main imaging modality for assessing the condition of the kidneys. By segmenting the adrenal region of interest in abdominal CT images, physicians can compute the volume, gray value and surface area of the adrenal gland and thereby determine the cause of kidney disease. However, manually annotating the lesion region is time-consuming, tedious and challenging, because the lesion closely resembles the surrounding tissue and the delineated boundary is extremely blurred. This paper therefore adopts MedNeXt, a Transformer-inspired large-kernel fully convolutional segmentation network, to perform volumetric segmentation of 3D adrenal data. To address the class-imbalance problem, the Dice loss is replaced with the symmetric Unified Focal loss, which improves segmentation accuracy. Considering that the boundary between the adrenal gland and surrounding tissue is difficult to distinguish, a boundary loss is further combined with the body loss to jointly supervise the segmentation process, so that the model pays more attention to boundary details, improving performance and yielding more accurate segmentation results. Experiments show that, compared with recent state-of-the-art models, the proposed method achieves the best performance on the 3D adrenal dataset used in this paper.
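
To make the loss design described above concrete, the following is a minimal PyTorch sketch, not the authors' released code. The body loss mixes a focal cross-entropy term with a focal Tversky term as a simplified stand-in for the symmetric Unified Focal loss of Yeung et al. (the original formulation shares a single delta and gamma across both terms), and the boundary term weights the predicted probabilities by a precomputed distance-to-boundary map, which is one common form of boundary supervision and may differ from the boundary loss actually used in the paper. All function names, parameter values (delta, gamma, lam, alpha) and tensor shapes are illustrative assumptions for a binary adrenal-versus-background setting.

import torch


def focal_tversky_term(probs, target, delta=0.7, gamma=0.75, eps=1e-7):
    # Focal Tversky component: with delta > 0.5, false negatives are penalised
    # more than false positives, which suits the small adrenal foreground.
    dims = (0, 2, 3, 4)                       # sum over batch and spatial dims of (N, C, D, H, W)
    tp = (probs * target).sum(dims)
    fn = ((1.0 - probs) * target).sum(dims)
    fp = (probs * (1.0 - target)).sum(dims)
    tversky = (tp + eps) / (tp + delta * fn + (1.0 - delta) * fp + eps)
    return torch.pow(1.0 - tversky, gamma).mean()


def focal_ce_term(probs, target, delta=0.7, gamma=2.0, eps=1e-7):
    # Focal cross-entropy component: (1 - p_t)^gamma down-weights easy voxels,
    # delta re-weights foreground vs. background voxels.
    p_t = probs * target + (1.0 - probs) * (1.0 - target)
    w = delta * target + (1.0 - delta) * (1.0 - target)
    return (-w * (1.0 - p_t).pow(gamma) * torch.log(p_t.clamp_min(eps))).mean()


def boundary_term(probs, dist_map):
    # Boundary component: expectation of a precomputed (signed) distance-to-boundary
    # map under the predicted foreground probabilities; it decreases as the predicted
    # mask hugs the ground-truth boundary.
    return (probs * dist_map).mean()


def combined_loss(logits, target, dist_map, lam=0.5, alpha=0.1):
    # Body loss (focal CE + focal Tversky) plus a weighted boundary loss.
    probs = torch.sigmoid(logits)             # binary adrenal-vs-background setting
    body = lam * focal_ce_term(probs, target) + (1.0 - lam) * focal_tversky_term(probs, target)
    return body + alpha * boundary_term(probs, dist_map)


# Toy usage on a random 32^3 patch, batch size 2, one foreground channel.
logits = torch.randn(2, 1, 32, 32, 32, requires_grad=True)
target = (torch.rand(2, 1, 32, 32, 32) > 0.95).float()    # sparse foreground, like the adrenal gland
dist_map = torch.randn(2, 1, 32, 32, 32)                   # stand-in for a precomputed distance map
loss = combined_loss(logits, target, dist_map)
loss.backward()

In the method itself, the logits would come from MedNeXt, and the relative weight of the boundary term against the body term is a tunable hyperparameter.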

References:

[1]GOODFELLOW I,BENGIO Y,COURVILLE A. Deep learning[M]. Cambridge,MA,USA:MIT Press,2016.
[2]RONNEBERGER O,FISCHER P,BROX T. U-net:convolutional networks for biomedical image segmentation[C]//Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention,Munich,Germany:Springer,2015:234-241.
[3]LI B,LIU S K,WU F,et al. RT-Unet:an advanced network based on residual network and transformer for medical image segmentation[J]. International journal of intelligent systems,2022,37(11):8565-8582.
[4]ZHOU Z W,SIDDIQUEE M M R,TAJBAKHSH N,et al. UNet++:a nested U-net architecture for medical image segmentation[C]//Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Granada,Spain:Springer International Publishing,2018:3-11.
[5]MILLETARI F,NAVAB N,AHMADI S A. V-Net:fully convolutional neural networks for volumetric medical image segmentation[C]//Proceedings of the 2016 Fourth International Conference on 3D Vision(3DV),Stanford,CA:IEEE,2016:565-571.
[6]ISENSEE F,PETERSEN J,KLEIN A,et al. nnU-Net:self-adapting framework for U-Net-based medical image segmentation[J/OL]. arXiv Preprint arXiv:1809.10486,2018.
[7]VASWANI A,SHAZEER N,PARMAR N,et al. Attention is all you need[J]. Advances in neural information processing systems,2017:5998-6008.
[8]DOSOVITSKIY A,BEYER L,KOLESNIKOV A,et al. An image is worth 16×16 words:transformers for image recognition at scale[J/OL]. arXiv Preprint arXiv:2010.11929,2020.
[9]LIU Z,LIN Y,CAO Y,et al. Swin transformer:hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal,QC,Canada:IEEE,2021:10012-10022.
[10]LIU Z,MAO H,WU C Y,et al. A convnet for the 2020s[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans,LA:IEEE,2022:11976-11986.
[11]ROY S,KOEHLER G,ULRICH C,et al. MedNeXt:transformer-driven scaling of convnets for medical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Vancouver,Canada:Springer,2023:405-415.
[12]XIAO X,LIAN S,LUO Z,et al. Weighted res-unet for high-quality retina vessel segmentation[C]//International Conference on Information Technology in Medicine and Education(ITME). Hangzhou,China:IEEE,2018:327-331.
[13]OKTAY O,SCHLEMPER J,FOLGOC L L,et al. Attention U-Net:learning where to look for the pancreas[C]//Proceedings of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention(MICCAI). Granada,Spain:Springer International Publishing,2018:564-572.
[14]JIANG T,LI X N. Segmentation of abdominal CT and cardiac MR images using multi-scale visual attention[J]. Journal of Image and Graphics,2024,29(1):268-279.
[15]HATAMIZADEH A,TANG Y,NATH V,et al. Unetr:transformers for 3d medical image segmentation[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa,HI:IEEE,2022:574-584.
[16]CHEN J,LU Y,YU Q,et al. Transunet:transformers make strong encoders for medical image segmentation[J/OL]. arXiv Preprint arXiv:2102.04306,2021.
[17]ZHANG Z,FU H,DAI H,et al. Et-net:a generic edge-attention guidance network for medical image segmentation[C]//Medical Image Computing and Computer Assisted Intervention-MICCAI 2019:22nd International Conference,Shenzhen,China:Springer International Publishing,2019:442-450.
[18]YANG J,JIAO L,SHANG R,et al. EPT-Net:edge perception transformer for 3D medical image segmentation[J]. IEEE transactions on medical imaging,2023,42(11):3229-3243.
[19]VALANARASU J M J,SINDAGI V A,HACIHALILOGLU I,et al. KiU-Net:overcomplete convolutional architectures for biomedical image and volumetric segmentation[J]. IEEE transactions on medical imaging,2021,41(4):965-976.
[20]LEE H J,KIM J U,LEE S,et al. Structure boundary preserving segmentation for medical image with ambiguous boundary[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle,WA,USA:IEEE,2020:4817-4826.
[21]MANZARI O N,KALEYBAR J M,SAADAT H,et al. BEFUnet:a hybrid CNN-Transformer architecture for precise medical image segmentation[J/OL]. arXiv Preprint arXiv:2402.08793,2024.
[22]RADFORD A,KIM J W,HALLACY C,et al. Learning transferable visual models from natural language supervision[C]//International Conference on Machine Learning. San Francisco,CA,USA:PMLR,2021:8748-8763.
[23]KIRILLOV A,MINTUN E,RAVI N,et al. Segment anything[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Paris,France:IEEE,2023:4015-4026.
[24]YEUNG M,SALA E,SCHÖNLIEB C B,et al. Unified focal loss:generalising dice and cross entropy-based losses to handle class imbalanced medical image segmentation[J]. Computerized medical imaging and graphics,2022,95:102026.
[25]PANG Y,LIANG J,HUANG T,et al. Slim UNETR:scale hybrid transformers to efficient 3D medical image segmentation under limited computational resources[J]. IEEE transactions on medical imaging,2024,43(3):994-1005.

Similar articles:

[1]Liu Haihong,Liu Min,Zhu Anqing. Regional Economic Forecasting Based on Improved Transformer Sequence Algorithm[J]. Journal of Nanjing Normal University(Natural Science Edition),2024,47(04):118. [doi:10.3969/j.issn.1001-4616.2024.04.013]

Memo:

Received: 2024-06-07.
Funding: National Natural Science Foundation of China (62101213, 62103165, 62302191); Innovation Team Project for Talent Introduction and Cultivation in Higher Education Institutions of Shandong Province.
Corresponding author: Niu Sijie, PhD, professor. Research interests: pattern recognition, medical image analysis. E-mail: sjniu@hotmail.com
Last Update: 2025-02-15