Yin Yeyu, Gao Jiaquan, Li Ying. Research on Image Feature Fusion Method for Pattern Image Retrieval[J]. Journal of Nanjing Normal University (Natural Science Edition), 2022, 45(02): 118-125. [doi:10.3969/j.issn.1001-4616.2022.02.015]

Research on Image Feature Fusion Method for Pattern Image Retrieval

Journal of Nanjing Normal University (Natural Science Edition) [ISSN: 1001-4616 / CN: 32-1239/N]

Volume: Vol. 45
Issue: No. 02, 2022
Pages: 118-125
Column: Computer Science and Technology
Publication date: 2022-05-15

Article Information

Title: Research on Image Feature Fusion Method for Pattern Image Retrieval
Article ID: 1001-4616(2022)02-0118-08
Author(s): Yin Yeyu, Gao Jiaquan, Li Ying
(School of Computer and Electronic Information / School of Artificial Intelligence, Nanjing Normal University, Nanjing 210023, China)
Keywords: pattern image; image retrieval; feature extraction; feature fusion
CLC number: TP301
DOI: 10.3969/j.issn.1001-4616.2022.02.015
Document code: A
Abstract:
With the development of textile CAD technology, the number of pattern images is growing rapidly. Being able to find similar pattern images in an enterprise image library quickly and accurately is of great significance for helping textile companies reduce costs and improve production efficiency. Aiming at the pattern image retrieval problem, this paper uses ResNet as the backbone network to construct PGLN (Pattern Global and Local feature Network), a pattern retrieval model based on feature fusion. In this model, global and local features are fused: the global branch uses the pooled feature maps of the deep network to efficiently integrate the salient features of the input image, while the local branch applies an attention mechanism over an interactive feature layer to detect salient regions of the image. To verify the effectiveness of PGLN, its retrieval performance is evaluated on a self-built pattern dataset (Pattern). Experimental results show that, compared with local feature extraction algorithms, global feature extraction algorithms, and feature fusion algorithms, the PGLN model achieves the best performance on the Pattern retrieval task.
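This record does not give PGLN's implementation details, but the architecture described in the abstract can be illustrated with a small, hypothetical PyTorch sketch: a ResNet-50 backbone, a global branch built from a pooled feature map, a local branch that weights spatial positions with a learned attention map, and concatenation as the fusion step. The specific choices below (ResNet-50, global average pooling, a 1x1-convolution attention head, L2-normalised concatenation, and the class name GlobalLocalFusionNet) are assumptions made for illustration, not the authors' published design.

# Hypothetical sketch of a two-branch global/local feature fusion retrieval model.
# The exact PGLN layers are not specified in this record; the backbone, pooling,
# attention head, and fusion step below are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class GlobalLocalFusionNet(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = models.resnet50()  # randomly initialised here; use pretrained weights in practice
        # Keep only the convolutional stages; drop the classification head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 2048 x H x W

        # Global branch: pooled feature map -> compact descriptor.
        self.global_fc = nn.Linear(2048, embed_dim)

        # Local branch: a 1x1 convolution scores each spatial position, and the
        # resulting attention map weights local features before pooling them.
        self.attn = nn.Conv2d(2048, 1, kernel_size=1)
        self.local_fc = nn.Linear(2048, embed_dim)

    def forward(self, x):
        fmap = self.features(x)                                 # B x 2048 x H x W

        # Global descriptor via global average pooling.
        g = F.adaptive_avg_pool2d(fmap, 1).flatten(1)           # B x 2048
        g = self.global_fc(g)

        # Attention-weighted local descriptor.
        a = torch.softmax(self.attn(fmap).flatten(2), dim=-1)   # B x 1 x HW
        local = torch.bmm(a, fmap.flatten(2).transpose(1, 2)).squeeze(1)  # B x 2048
        local = self.local_fc(local)

        # Fuse the two branches; L2-normalise so dot products equal cosine similarity.
        fused = torch.cat([g, local], dim=1)                    # B x 2*embed_dim
        return F.normalize(fused, dim=1)


if __name__ == "__main__":
    net = GlobalLocalFusionNet().eval()
    with torch.no_grad():
        library = net(torch.randn(8, 3, 224, 224))    # pre-computed library descriptors
        query = net(torch.randn(1, 3, 224, 224))      # query pattern descriptor
        scores = query @ library.t()                  # cosine similarity against the library
        print(scores.argsort(dim=1, descending=True)) # indices ranked from most to least similar

Because the fused descriptors are unit-normalised, ranking library images by dot product with the query is equivalent to ranking by cosine similarity, which is a common choice for this kind of retrieval setup.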


Notes:
Funding: National Natural Science Foundation of China (61872422).
Corresponding author: Gao Jiaquan, PhD, professor, doctoral supervisor; research interests: high-performance computing, big data analytics and visualization. E-mail: springf12@163.com