Zhang Shuai, Xie Zhihua, Niu Jieyi, et al. Near-infrared and Visible-light Image Heterogeneous Face Recognition Based on Adversarial Domain Adaptation Learning[J]. Journal of Nanjing Normal University (Natural Science Edition), 2020, 43(04): 95-103. [doi:10.3969/j.issn.1001-4616.2020.04.014]

Near-infrared and Visible-light Image Heterogeneous Face Recognition Based on Adversarial Domain Adaptation Learning

Journal of Nanjing Normal University (Natural Science Edition) [ISSN:1001-4616/CN:32-1239/N]

Volume:
Vol. 43
Issue:
No. 4, 2020
Pages:
95-103
Column:
Smart Emergency Information Technology
Publication Date:
2020-12-30

Article Info

Title:
Near-infrared and Visible-light Image Heterogeneous Face Recognition Based on Adversarial Domain Adaptation Learning
Article ID:
1001-4616(2020)04-0095-09
Author(s):
Zhang Shuai, Xie Zhihua, Niu Jieyi, Li Yi (张帅, 谢志华, 牛杰一, 李毅)
Key Laboratory of Optic-Electronic and Communication, Jiangxi Science and Technology Normal University, Nanchang 330031, China
Keywords:
heterogeneous face recognition; unsupervised learning; adversarial learning; domain adaptation
CLC number:
TP39
DOI:
10.3969/j.issn.1001-4616.2020.04.014
Document code:
A
Abstract:
From the perspective of adversarial discriminative domain adaptation, this paper uses unsupervised learning to reduce the modality difference between bimodal images and proposes a near-infrared and visible-light heterogeneous face recognition method based on adversarial domain adaptation learning. First, a visible-light face recognition network based on a convolutional neural network is pre-trained with a joint cross-entropy and center loss, which gives the network strong discriminative ability and provides prior knowledge to the second network. Then, a near-infrared face recognition network with an identical structure is trained adversarially with an adversarial loss, so that the feature distributions extracted by the two networks become consistent, narrowing the gap between the modalities. Finally, the prior knowledge provided by the first network is used to output the posterior probabilities of images from the other modality. Experimental results show that the proposed method achieves good performance without requiring label information for the near-infrared face images or a large-scale training set.
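
The abstract outlines a three-stage procedure: (1) supervised pre-training of a visible-light (VIS) recognition network with a joint cross-entropy and center loss, (2) adversarial training of a structurally identical near-infrared (NIR) network so that its feature distribution matches that of the VIS network, and (3) classification of NIR images with the classifier learned in the first stage. Below is a minimal PyTorch-style sketch of that scheme; the Encoder and CenterLoss definitions, the discriminator interface, the loss weight lam, and all optimizer settings are illustrative assumptions, not the authors' exact implementation.

# Minimal PyTorch-style sketch of the three stages described in the abstract.
# Network shapes, the discriminator, the weight `lam` and the optimizers are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Small CNN feature extractor; the VIS and NIR encoders share this structure.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class CenterLoss(nn.Module):
    # Pulls each feature towards its learnable class center; used jointly with cross-entropy.
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

def pretrain_vis(encoder_vis, classifier, center_loss, vis_loader, lam=0.01, lr=1e-3):
    # Stage 1: supervised pre-training on labelled VIS faces (cross-entropy + center loss).
    params = (list(encoder_vis.parameters()) + list(classifier.parameters())
              + list(center_loss.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    for imgs, labels in vis_loader:
        feats = encoder_vis(imgs)
        loss = F.cross_entropy(classifier(feats), labels) + lam * center_loss(feats, labels)
        opt.zero_grad(); loss.backward(); opt.step()

def adapt_nir(encoder_vis, encoder_nir, discriminator, vis_loader, nir_loader, lr=1e-4):
    # Stage 2: adversarially align unlabelled NIR features with the frozen VIS feature space.
    # nir_loader is assumed to yield image batches only (no identity labels).
    encoder_vis.eval()  # source encoder stays fixed
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    opt_t = torch.optim.Adam(encoder_nir.parameters(), lr=lr)
    for (vis_imgs, _), nir_imgs in zip(vis_loader, nir_loader):
        with torch.no_grad():
            vis_feats = encoder_vis(vis_imgs)
        nir_feats = encoder_nir(nir_imgs)
        # Discriminator learns to tell VIS features (label 1) from NIR features (label 0).
        d_loss = (F.binary_cross_entropy_with_logits(discriminator(vis_feats),
                                                     torch.ones(len(vis_imgs), 1))
                  + F.binary_cross_entropy_with_logits(discriminator(nir_feats.detach()),
                                                       torch.zeros(len(nir_imgs), 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # NIR encoder is updated to fool the discriminator; no NIR labels are needed.
        g_loss = F.binary_cross_entropy_with_logits(discriminator(encoder_nir(nir_imgs)),
                                                    torch.ones(len(nir_imgs), 1))
        opt_t.zero_grad(); g_loss.backward(); opt_t.step()

# Stage 3: the frozen VIS classifier outputs posterior probabilities for NIR probes, e.g.
#   probs = F.softmax(classifier(encoder_nir(nir_batch)), dim=1)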

Memo

Memo:
Received: 2020-06-10.
Funding: National Natural Science Foundation of China (61861020); Science and Technology Project of the Education Department of Jiangxi Province (GJJ190578).
Corresponding author: Xie Zhihua, Ph.D., Professor; research interests: computer vision and pattern recognition. E-mail: xie_zhihua68@aliyun.com
Last Update: 2020-11-15