References:
[1]GONG Y,KE Q,ISARD M,et al. A multi-view embedding space for modeling internet images,tags and their semantics[J]. International journal of computer vision,2014,106(2):210-233.
[2]PORIA S,CAMBRIA E,BAJPAI R,et al. A review of affective computing:from unimodal analysis to multimodal fusion[J]. Information fusion,2017,37:98-125.
[3]CAMBRIA E. Affective computing and sentiment analysis[J]. IEEE intelligent systems,2016,31(2):102-107.
[4]CARRERAS X,MARQUEZ L. Boosting trees for anti-spam email filtering[C]//Proceedings of the 4th International Conference on Recent Advances in Natural Language Processing. Tzigov Chark,Bulgaria,2001.
[5]BLEI D M,NG A Y,JORDAN M I. Latent Dirichlet allocation[J]. Journal of machine learning research,2003,3:993-1022.
[6]HUANG Zeming. Research on an academic paper recommendation system based on topic models[D]. Dalian:Dalian Maritime University,2013. (in Chinese)
[7]WANG Lidong,WEI Baogang,YUAN Jie. Document clustering based on probabilistic topic models[J]. Acta electronica sinica,2012,40(11):2346-2350. (in Chinese)
[8]QIU Yunfei,GUO Milun,SHAO Liangshan. Bursty topic detection in microblogs based on topic trees[J]. Journal of computer applications,2014,34(8):2332-2335. (in Chinese)
[9]HOCHREITER S,SCHMIDHUBER J. Long short-term memory[J]. Neural computation,1997,9(8):1735-1780.
[10]SUTSKEVER I,VINYALS O,LE Q V. Sequence to sequence learning with neural networks[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal,Canada,2014.
[11]CHO K,VAN MERRIËNBOER B,GULCEHRE C,et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Doha,Qatar,2014.
[12]VINYALS O,LE Q. A neural conversational model[J]. arXiv preprint arXiv:1506.05869,2015.
[13]RUSH A M,CHOPRA S,WESTON J. A neural attention model for abstractive sentence summarization[J]. arXiv preprint arXiv:1509.00685,2015.
[14]LUONG M T,LE Q V,SUTSKEVER I,et al. Multi-task sequence to sequence learning[C]//Proceedings of the International Conference on Learning Representations. San Juan,Puerto Rico,2016.
[15]BAHDANAU D,CHO K,BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473,2014.
[16]ZHANG M L,ZHOU Z H. A review on multi-label learning algorithms[J]. IEEE transactions on knowledge and data engineering,2014,26(8):1819-1837.
[17]LUACES O,DÍEZ J,BARRANQUERO J,et al. Binary relevance efficacy for multilabel classification[J]. Progress in artificial intelligence,2012,1(4):303-313.
[18]CHERMAN E A,MONARD M C,METZ J. Multi-label problem transformation methods:a case study[J]. CLEI electronic journal,2011,14(1):4-4.
[19]READ J,PFAHRINGER B,HOLMES G,et al. Classifier chains for multi-label classification[J]. Machine learning,2011,85(3):333-359.
[20]ZHANG M L,ZHOU Z H. ML-KNN:A lazy learning approach to multi-label learning[J]. Pattern recognition,2007,40(7):2038-2048.
[21]ELISSEEFF A,WESTON J. A kernel method for multi-labelled classification[C]//Proceedings of the 14th International Conference on Neural Information Processing Systems:Natural and Synthetic. Vancouver,British Columbia,Canada,2001.
[22]XU K,BA J,KIROS R,et al. Show,attend and tell:Neural image caption generation with visual attention[C]//Proceedings of the 32nd International Conference on Machine Learning. Lille,France,2015.
[23]LEE C Y,OSINDERO S. Recursive recurrent nets with attention modeling for OCR in the wild[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,USA,2016.