Research on Machine Translation Method Based on Cycle Generative Adversarial Network

Journal of Nanjing Normal University (Natural Science Edition) [ISSN:1001-4616/CN:32-1239/N]

Issue:
2022, No. 1
Page:
104-109
Research Field:
Computer Science and Technology
Publishing date:

Info

Title:
Research on Machine Translation Method Based on Cycle Generative Adversarial Network
Author(s):
Xia Jun1, Zhou Xiangzhen2, Sui Dong3
(1. School of Foreign Languages, Qiannan Normal University for Nationalities, Duyun 558000, China)
(2. Faculty of Information Science and Technology, National University of Malaysia, Selangor 43600, Malaysia)
(3. School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102406, China)
Keywords:
speech recognition; language translation; cycle generative adversarial network; long short-term memory module
PACS:
TP391
DOI:
10.3969/j.issn.1001-4616.2022.01.015
Abstract:
In recent years, intelligent language processing has been widely applied in language learning. However, most previous systems trained acoustic models with a discriminative model combined with an HMM hybrid model, an approach limited by the difficulty of optimizing the network and by accuracy deviations in its labeled data. This paper therefore proposes a machine translation method based on a cycle generative adversarial network, which trains the machine translation model adversarially. First, input speech is passed to the neural machine translation module for discrete pre-transformation to obtain MFCC features. Next, the preprocessed speech is fed into the feature extraction module, where speech features are extracted recurrently in combination with a long short-term memory (LSTM) network. Finally, the speech output by the network model is compared with human-translated speech: a discriminator judges whether the output speech features match the human translation and, if they do not, the network is further optimized. Experimental results show that the proposed network improves significantly on the traditional Gaussian mixture model. The method achieves excellent results on the CSDN, RockYou, Tianya, and Yahoo password sets, reducing the word error rate on the Yahoo set to 19.5%.
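
The pipeline's first stage turns raw speech into MFCC features. The paper publishes no code, so the following is a minimal illustrative sketch of that pre-processing step; the librosa library, the 16 kHz sample rate, and the 13-coefficient setting are assumptions for illustration, not details from the paper.

import librosa

def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
    """Load a speech file and return frame-level MFCC features."""
    # Resample to a fixed rate so all utterances share one time base.
    signal, sr = librosa.load(wav_path, sr=sr)
    # MFCC matrix of shape (n_mfcc, n_frames); transpose to
    # (n_frames, n_mfcc) so each row is one frame's feature vector.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T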
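The second stage extracts higher-level features from the MFCC frames recurrently with an LSTM. A sketch of such a feature-extraction module follows; PyTorch, the bidirectional layout, and all layer sizes are assumed for illustration and are not specified in the abstract.

import torch.nn as nn

class LSTMFeatureExtractor(nn.Module):
    """Recurrent feature extractor over MFCC frame sequences."""
    def __init__(self, n_mfcc=13, hidden_size=256, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_mfcc,
                            hidden_size=hidden_size,
                            num_layers=num_layers,
                            batch_first=True,
                            bidirectional=True)

    def forward(self, mfcc_frames):
        # mfcc_frames: (batch, n_frames, n_mfcc)
        outputs, _ = self.lstm(mfcc_frames)
        # One feature vector per frame: (batch, n_frames, 2*hidden_size)
        return outputs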
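The third stage is the adversarial comparison: a discriminator judges whether the generated features match the human translation, and the networks are optimized when they do not. Below is a schematic single training step; the generator/discriminator modules and the binary cross-entropy objective are generic GAN conventions rather than details confirmed by the paper, and the discriminator is assumed to end in a sigmoid.

import torch
import torch.nn.functional as F

def adversarial_step(generator, discriminator, g_opt, d_opt,
                     speech_features, human_translation):
    """One GAN update: discriminator first, then generator."""
    # Discriminator: score human references as real, generated output as fake.
    fake = generator(speech_features).detach()
    d_real = discriminator(human_translation)
    d_fake = discriminator(fake)
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: if its output does not fool the discriminator,
    # this loss is large and the network is optimized further.
    out = discriminator(generator(speech_features))
    g_loss = F.binary_cross_entropy(out, torch.ones_like(out))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()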

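The reported metric is word error rate (WER), e.g. 19.5% on the Yahoo set. WER is the word-level edit distance between the system output and the reference, divided by the reference length; a self-contained implementation for checking such figures:

def word_error_rate(reference, hypothesis):
    """Edit distance between word sequences / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j].
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substituted word in a four-word reference gives WER = 0.25.
print(word_error_rate("the cat sat down", "the cat sat up"))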
