Research on Image Feature Fusion Method for Pattern Image Retrieval

《南京师大学报(自然科学版)》(Journal of Nanjing Normal University, Natural Science Edition) [ISSN: 1001-4616 / CN: 32-1239/N]

Issue:
2022, No. 2
Page:
118-125
Research Field:
Computer Science and Technology
Publishing date:

Info

Title:
Research on Image Feature Fusion Method for Pattern Image Retrieval
Author(s):
Yin Yeyu, Gao Jiaquan, Li Ying
(School of Computer and Electronic Information/School of Artificial Intelligence,Nanjing Normal University,Nanjing 210023,China)
Keywords:
pattern image; image retrieval; feature extraction; feature fusion
CLC Number:
TP301
DOI:
10.3969/j.issn.1001-4616.2022.02.015
Abstract:
With the development of textile CAD technology, the number of pattern images is growing rapidly. Finding similar pattern images in a company's library quickly and accurately can help textile companies greatly reduce costs and improve production efficiency. Aiming at the problem of pattern image retrieval, this paper uses ResNet as the backbone network to construct a feature-fusion-based pattern retrieval model, PGLN (Pattern Global and Local Feature Network). In this model, the global branch pools the feature map of the deep network to efficiently integrate the salient features of the input image, while the local branch applies an attention mechanism over interactive feature layers to detect salient regions of the image. To verify the effectiveness of the PGLN model, this paper evaluates its retrieval performance on a self-built pattern dataset (Pattern). Experimental results show that, compared with local feature extraction algorithms, global feature extraction algorithms, and fused-feature algorithms, the PGLN model achieves the best performance on the pattern image retrieval task.
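
The abstract describes PGLN only at a high level; the following PyTorch snippet is a minimal, hypothetical sketch of such a two-branch global/local architecture. The backbone choice (ResNet-50), the use of average pooling for the global branch, the small convolutional attention head for the local branch, and the concatenation-based fusion are all illustrative assumptions, not the authors' actual implementation.

# Hedged sketch of a PGLN-style two-branch retrieval model (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class PGLNSketch(nn.Module):
    def __init__(self, local_dim=128):
        super().__init__()
        backbone = resnet50(weights=None)
        # Shared ResNet trunk up to an intermediate block (feeds the local branch)
        self.stem_to_c4 = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        )
        # Deepest block feeds the global branch
        self.c5 = backbone.layer4

        # Global branch: pool the deep feature map into a single descriptor
        self.global_pool = nn.AdaptiveAvgPool2d(1)

        # Local branch: a small attention head scores spatial positions of the
        # intermediate map so that salient regions dominate the local descriptor
        self.attention = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 1, kernel_size=1), nn.Softplus(),
        )
        self.local_proj = nn.Conv2d(1024, local_dim, kernel_size=1)

    def forward(self, x):
        c4 = self.stem_to_c4(x)              # (B, 1024, H, W) intermediate map
        c5 = self.c5(c4)                     # (B, 2048, h, w) deep map

        g = self.global_pool(c5).flatten(1)  # global descriptor
        g = F.normalize(g, dim=1)

        attn = self.attention(c4)            # (B, 1, H, W) saliency scores
        local = self.local_proj(c4) * attn   # attention-weighted local features
        l = local.flatten(2).sum(dim=2)      # aggregate over salient regions
        l = F.normalize(l, dim=1)

        # Fuse global and local descriptors into one retrieval embedding
        return torch.cat([g, l], dim=1)


if __name__ == "__main__":
    model = PGLNSketch().eval()
    with torch.no_grad():
        emb = model(torch.randn(2, 3, 224, 224))
    print(emb.shape)  # torch.Size([2, 2176]) = 2048 global + 128 local dims

At query time, such an embedding would typically be compared against library embeddings with cosine similarity to rank candidate pattern images; the actual matching and ranking procedure used in the paper is not specified in this abstract.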
