References:
[1]LI H,SHEN C. Reading car license plates using deep convolutional neural networks and LSTMs[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1601.05610v1.
[2]SHI B,BAI X,YAO C. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition[J]. IEEE transactions on pattern analysis and machine intelligence,2017,39(11):2298-2304.
[3]ZHU J Y,PARK T,ISOLA P,et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice:IEEE Computer Society,2017:2223-2232.
[4]GOU C,WANG K,YAO Y,et al. Vehicle license plate recognition based on extremal regions and restricted Boltzmann machines[J]. IEEE transactions on intelligent transportation systems,2016,17(4):1096-1107.
[5]SHIN H C,TENENHOLTZ N A,ROGERS J K,et al. Medical image synthesis for data augmentation and anonymization using generative adversarial networks[C]//International Workshop on Simulation and Synthesis in Medical Imaging. Berlin:Springer,2018.
[6]NOMURA S,YAMANAKA K,KATAI O,et al. A novel adaptive morphological approach for degraded character image segmentation[J]. Pattern recognition,2005,38(11):1961-1975.
[7]SARFRAZ M,AHMED M J. Exploring critical approaches of evolutionary computation:an approach to license plate recognition system using neural network[M]. Hershey:IGI Global,2019:20-36.
[8]JIAO J,YE Q,HUANG Q. A configurable method for multi-style license plate recognition[J]. Pattern recognition,2009,42(3):358-369.
[9]NAIR A S,RAJU S,HARIKRISHNAN K J,et al. A survey of techniques for license plate detection and recognition[J]. i-manager’s journal on image processing,2018,5(1):25.
[10]WELANDER P,KARLSSON S,EKLUND A. Generative adversarial networks for image-to-image translation on multi-contrast MR images:a comparison of CycleGAN and UNIT[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1806.07777.
[11]LLORENS D,MARZAL A,PALAZÓN V,et al. Car license plates extraction and recognition based on connected components analysis and HMM decoding[C]//Iberian Conference on Pattern Recognition and Image Analysis. Berlin:Springer,2005:571-578.
[12]WEN Y,LU Y,YAN J,et al. An algorithm for license plate recognition applied to intelligent transportation system[J]. IEEE transactions on intelligent transportation systems,2011,12(3):830-845.
[13]TAO D,LIN X,JIN L,et al. Principal component 2-D long short-term memory for font recognition on single Chinese characters[J]. IEEE transactions on cybernetics,2016,46(3):756-765.
[14]GOODFELLOW I,POUGET-ABADIE J,MIRZA M,et al. Generative adversarial nets[J]. Advances in neural information processing systems,2014,27:2672-2680.
[15]RADFORD A,METZ L,CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1511.06434.
[16]REED S,AKATA Z,YAN X,et al. Generative adversarial text to image synthesis[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1605.05396.
[17]ISOLA P,ZHU J Y,ZHOU T,et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu:IEEE Computer Society,2017:1125-1134.
[18]ARJOVSKY M,CHINTALA S,BOTTOU L. Wasserstein generative adversarial networks[J]. Proceedings of machine learning research,2017,70:214-223.
[19]PATHAK D,KRAHENBUHL P,DONAHUE J,et al. Context encoders:feature learning by inpainting[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas:IEEE Computer Society,2016:2536-2544.
[20]SALIMANS T,GOODFELLOW I,ZAREMBA W,et al. Improved techniques for training GANs[J]. Advances in neural information processing systems,2016,29:2234-2242.
[21]WU J,ZHANG C,XUE T,et al. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling[J]. Advances in neural information processing systems,2016,29:82-90.
[22]GUPTA A,VEDALDI A,ZISSERMAN A. Synthetic data for text localisation in natural images[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas:IEEE Computer Society,2016:2315-2324.
[23]JADERBERG M,SIMONYAN K,VEDALDI A,et al. Synthetic data and artificial neural networks for natural scene text recognition[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1406.2227.
[24]WANG Z,YANG J,JIN H,et al. DeepFont:identify your font from an image[C]//Proceedings of the 23rd ACM International Conference on Multimedia. Brisbane:ACM,2015:451-459.
[25]YU J,FARIN D,KRÜGER C,et al. Improving person detection using synthetic training data[C]//2010 IEEE International Conference on Image Processing. Hong Kong:IEEE Computer Society,2010:3477-3480.
[26]ROS G,SELLART L,MATERZYNSKA J,et al. The SYNTHIA dataset:a large collection of synthetic images for semantic segmentation of urban scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas:IEEE Computer Society,2016:3234-3243.
[27]ZHENG Z,ZHENG L,YANG Y. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice:IEEE Computer Society,2017:3754-3762.
[28]WEI Y,ZHAO Y,LU C,et al. Cross-modal retrieval with CNN visual features:a new baseline[J]. IEEE transactions on cybernetics,2017,47(2):449-460.
[29]GRAVES A,FERNÁNDEZ S,GOMEZ F,et al. Connectionist temporal classification:labelling unsegmented sequence data with recurrent neural networks[C]//Proceedings of the 23rd International Conference on Machine Learning. Pittsburgh:ACM,2006:369-376.
[30]ZEILER M D. ADADELTA:an adaptive learning rate method[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1212.5701.
[31]ABADI M,AGARWAL A,BARHAM P,et al. TensorFlow:large-scale machine learning on heterogeneous distributed systems[EB/OL]. [2019-06-01]. https://www.arxiv-vanity.com/papers/1603.04467/.
[32]LIU R,LI M. EasyPR[EB/OL]. [2019-06-01]. https://github.com/liuruoze/EasyPR.
[33]IM D J,KIM C D,JIANG H,et al. Generating images with recurrent adversarial networks[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1602.05110.
[34]HINTON G,VINYALS O,DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1503.02531.
[35]PEREYRA G,TUCKER G,CHOROWSKI J,et al. Regularizing neural networks by penalizing confident output distributions[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1701.06548.
[36]SIFRE L,MALLAT S. Rigid-motion scattering for image classification[D]. Palaiseau:École Polytechnique CMAP,2014.
[37]HOWARD A G,ZHU M,CHEN B,et al. MobileNets:efficient convolutional neural networks for mobile vision applications[EB/OL]. [2019-06-01]. https://arxiv.org/abs/1704.04861.