[1]DAS D,DAS A K,PAL A R,et al. Meta-heuristic algorithms-tuned Elman vs. Jordan recurrent neural networks for modeling of electron beam welding process[J]. Neural processing letters,2021,53(2):1647-1663.
[2]BRUNO J H,JARVIS E D,LIBERMAN M,et al. Birdsong learning and culture:analogies with human spoken language[J]. Annual review of linguistics,2021,7(1):89-97.
[3]BOER B D,THOMPSON B,RAVIGNANI A,et al. Evolutionary dynamics do not motivate a single-mutant theory of human language[J]. Scientific reports,2020,10(1):22-31.
[4]MUKHERJEA A,ALI S,SMITH J A. A human rights perspective on palliative care:unraveling disparities and determinants among asian american populations[J]. Topics in language disorders,2020,40(3):278-296.
[5]LI K,PAN W,LI Y,et al. A method to detect sleep apnea based on deep neural network and hidden Markov model using single-lead ECG signal[J]. Neurocomputing,2018,294(6):94-101.
[6]ZHANG Xudong,HUANG Yufang,DU Jiahao,et al. Stock price prediction based on a discrete hidden Markov model[J]. Journal of Zhejiang University of Technology,2020,48(2):148-153.
[7]OUISAADANE A,SAFI S. A comparative study for Arabic speech recognition system in noisy environments[J]. International journal of speech technology,2021,11(3):1-10.
[8]PIKHART M. Human-computer interaction in foreign language learning applications:applied linguistics viewpoint of mobile learning[J]. Procedia computer science,2021,184:92-98.
[9]YAO Q,UBALE R,LANGE P,et al. Spoken language understanding of human-machine conversations for language learning applications[J]. Journal of signal processing systems,2020,92(3):78-89.
[10]HINTON G E. A practical guide to training restricted boltzmann machines[J]. Momentum,2012,9(1):599-619.
[11]DEWANGAN S,ALVA S,JOSHI N,et al. Experience of neural machine translation between Indian languages[J]. Machine translation,2021,35(1):71-99.
[12]SHTERIONOV D,SUPERBO R,NAGLE P,et al. Human versus automatic quality evaluation of NMT and PBSMT[J]. Machine translation,2018,32(3):217-235.
[13]HOCHREITER S,SCHMIDHUBER J. Long short-term memory[J]. Neural computation,1997,9(8):1735-1780.
[14]GRAVES A,JAITLY N. Towards end-to-end speech recognition with recurrent neural networks[C]//International Conference on Machine Learning,Beijing,2014:1764-1772.
[15]DENG L,YU D. Deep learning for signal and information processing[J]. Now publishers,2013,12(8):218-227.
[16]WU S,LI G,DENG L,et al. L1-norm batch normalization for efficient training of deep neural networks[J]. IEEE transactions on neural networks and learning systems,2018,30(7):2043-2051.
[17]MAAS A L,QI P,XIE Z,et al. Building DNN acoustic models for large vocabulary speech recognition[J]. Computer speech & language,2014,41(C):195-213.
[18]CUI X,ZHANG W,FINKLER U,et al. Distributed training of deep neural network acoustic models for automatic speech recognition:a comparison of current training strategies[J]. IEEE signal processing magazine,2020,37(3):39-49.
[19]TAI K S,SOCHER R,MANNING C D. Improved semantic representations from tree-structured long short-term memory networks[J]. Computer science,2015,5(1):36-41.
[20]XU Y,DU J,DAI L R,et al. An experimental study on speech enhancement based on deep neural networks[J]. IEEE signal processing letters,2013,21(1):65-68.
[21]SALMELA L,TSIPINAKIS N,FOI A,et al. Predicting ultrafast nonlinear dynamics in fibre optics with a recurrent neural network[J]. Nature machine intelligence,2021,12(8):1-11.
[22]POLIAK A,RASTOGI P,MARTIN M P,et al. Efficient,compositional,order-sensitive n-gram embeddings[C]//Conference of the European Chapter of the Association for Computational Linguistics,London,2017:503-508.
[23]HUANG Guangxu,TIAN Yao,KANG Jian,et al. LSTM recurrent neural network speech recognition system based on i-vector features under low-resource conditions[J]. Application Research of Computers,2017,34(2):392-396.
[24]SHU Fan,QU Dan,ZHANG Wenlin,et al. Low-resource speech recognition method using long short-term memory networks[J]. Journal of Xi'an Jiaotong University,2017,51(10):120-127.