
Notes on attention-mechanism papers in natural language processing

  • Personal notes; coverage is far from complete, and additions are welcome
  • Papers from 2018 onward
  • From AAAI, IJCAI, ACL, EMNLP, and similar conferences
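Most of the papers below build on the standard scaled dot-product attention of Vaswani et al. (2017). As a shared reference point, here is a minimal NumPy sketch of that operation; it is illustrative only and not taken from any specific paper in this list:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the basic attention operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (2, 4)
assert np.allclose(w.sum(axis=-1), 1.0)  # each query's weights sum to 1
```

The variants surveyed here change how the scores are computed (syntax, hierarchy, relative positions), what they attend over (words, sentences, modalities), or how the softmax is constrained (hard, sparse, local).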

[2018]

  1. Hierarchical Attention Flow for Multiple-Choice Reading Comprehension Haichao Zhu, Furu Wei, Bing Qin, Ting Liu AAAI, 2018.

  2. Dual Attention Network for Product Compatibility and Function Satisfiability Analysis Hu Xu, Sihong Xie, Lei Shu, Philip S. Yu, AAAI, 2018.

  3. Improving Neural Fine-Grained Entity Typing with Knowledge Attention Ji Xin, Yankai Lin, Zhiyuan Liu, Maosong Sun, AAAI, 2018.

  4. Improving Review Representations with User Attention and Product Attention for Sentiment Classification Zhen Wu, Xin-Yu Dai, Cunyan Yin, Shujian Huang, Jiajun Chen, AAAI, 2018.

  5. Mention and Entity Description Co-Attention for Entity Disambiguation Feng Nie, Yunbo Cao, Jinpeng Wang, Chin-Yew Lin, Rong Pan, AAAI, 2018.

  6. Hierarchical Attention Transfer Network for Cross-Domain Sentiment Classification Zheng Li, Ying Wei, Yu Zhang, Qiang Yang, AAAI, 2018.

  7. A Question-Focused Multi-Factor Attention Network for Question Answering Souvik Kundu, Hwee Tou Ng, AAAI, 2018.

  8. RNN-Based Sequence-Preserved Attention for Dependency Parsing Yi Zhou, Junying Zhou, Lu Liu, Jiangtao Feng, Haoyuan Peng, Xiaoqing Zheng, AAAI, 2018.

  9. Adaptive Co-Attention Network for Named Entity Recognition in Tweets Qi Zhang, Jinlan Fu, Xiaoyu Liu, Xuanjing Huang, AAAI, 2018.

  10. Chinese LIWC Lexicon Expansion via Hierarchical Classification of Word Embeddings with Sememe Attention Xiangkai Zeng, Cheng Yang, Cunchao Tu, Zhiyuan Liu, Maosong Sun, AAAI, 2018.

  11. Multi-Attention Recurrent Network for Human Communication Comprehension Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, Louis-Philippe Morency, AAAI, 2018.

  12. Hierarchical Recurrent Attention Network for Response Generation Chen Xing, Yu Wu, Wei Wu, Yalou Huang, Ming Zhou, AAAI, 2018.

  13. Word Attention for Sequence to Sequence Text Understanding Lijun Wu, Fei Tian, Li Zhao, Jianhuang Lai, Tie-Yan Liu, AAAI, 2018.

  14. DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding Tao Shen, Jing Jiang, Tianyi Zhou, Shirui Pan, Guodong Long, Chengqi Zhang, AAAI, 2018.

  15. Attention-Based Belief or Disbelief Feature Extraction for Dependency Parsing Haoyuan Peng, Lu Liu, Yi Zhou, Junying Zhou, Xiaoqing Zheng, AAAI, 2018.

  16. An Unsupervised Model with Attention Autoencoders for Question Retrieval Minghua Zhang, Yunfang Wu, AAAI, 2018.

  17. Deep Semantic Role Labeling with Self-Attention Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, Xiaodong Shi, AAAI, 2018.

  18. Event Detection via Gated Multilingual Attention Mechanism Jian Liu, Yubo Chen, Kang Liu, Jun Zhao, AAAI, 2018.

  19. Neural Knowledge Acquisition via Mutual Attention between Knowledge Graph and Text Xu Han, Zhiyuan Liu, Maosong Sun, AAAI, 2018.

  20. Syntax-Directed Attention for Neural Machine Translation Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao, AAAI, 2018.

  21. Attend and Diagnose: Clinical Time Series Analysis Using Attention Models Huan Song, Deepta Rajan, Jayaraman J. Thiagarajan, Andreas Spanias, AAAI, 2018.

  22. Attention-Based Transactional Context Embedding for Next-Item Recommendation Shoujin Wang, Liang Hu, Longbing Cao, Xiaoshui Huang, Defu Lian, Wei Liu, AAAI, 2018.

  23. Attention-via-Attention Neural Machine Translation Shenjian Zhao, Zhihua Zhang, AAAI, 2018.

  24. Visual Attention Model for Name Tagging in Multimodal Social Media Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, Heng Ji, ACL, 2018.

  25. Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings Shaohui Kuang, Junhui Li, António Branco, Weihua Luo, Deyi Xiong, ACL, 2018.

  26. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, Hua Wu, ACL, 2018.

  27. Cold-Start Aware User and Product Attention for Sentiment Classification Reinald Kim Amplayo, Jihyeok Kim, Sua Sung, Seung-won Hwang, ACL, 2018.

  28. Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering Wei Wang, Ming Yan, Chen Wu, ACL, 2018.

  29. Multi-Input Attention for Unsupervised OCR Correction Rui Dong, David Smith, ACL, 2018.

  30. How Much Attention Do You Need? A Granular Analysis of Neural Machine Translation Architectures Tobias Domhan, ACL, 2018.

  31. Accelerating Neural Transformer via an Average Attention Network Biao Zhang, Deyi Xiong, Jinsong Su, ACL, 2018.
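The average attention network of entry 31 replaces decoder self-attention with a cumulative average over preceding positions, which can be computed incrementally at decoding time. A rough sketch of just that core idea (my simplified reading; the paper additionally applies a gating layer and feed-forward transform, omitted here):

```python
import numpy as np

def average_attention(x):
    """Cumulative-average 'attention': y_j = (1/j) * sum_{k<=j} x_k."""
    positions = np.arange(1, x.shape[0] + 1)[:, None]  # 1..n as a column vector
    return np.cumsum(x, axis=0) / positions            # running mean over prefix

x = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [4.0, 4.0]])
y = average_attention(x)
assert np.allclose(y[0], [2.0, 0.0])       # first position attends only to itself
assert np.allclose(y[-1], x.mean(axis=0))  # last position averages all positions
```

Because each position's weights are fixed (uniform over the prefix), no query-key scores are needed, which is the source of the decoding speedup the paper reports.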

  32. Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic, ACL, 2018.

  33. Document Modeling with External Attention for Sentence Extraction Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata, Jiangsheng Yu, Yi Chang, ACL, 2018.

  34. Efficient Large-Scale Neural Domain Classification with Personalized Attention Young-Bum Kim, Dongchan Kim, Anjishnu Kumar, Ruhi Sarikaya, ACL, 2018.

  35. Neural Coreference Resolution with Deep Biaffine Attention by Joint Mention Detection and Mention Clustering Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, Dragomir Radev, ACL, 2018.

  36. Document Embedding Enhanced Event Detection with Hierarchical and Supervised Attention Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, Xueqi Cheng, ACL, 2018.

  37. Sparse and Constrained Attention for Neural Machine Translation Chaitanya Malaviya, Pedro Ferreira, André F. T. Martins, ACL, 2018.

  38. Cross-Target Stance Classification with Self-Attention Networks Chang Xu, Cecile Paris, Surya Nepal, Ross Sparks, ACL, 2018.

  39. A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification Zeyang Lei, Yujiu Yang, Min Yang, Yi Liu, ACL, 2018.

  40. Improving Slot Filling in Spoken Language Understanding with Joint Pointer and Attention Lin Zhao, Zhe Feng, ACL, 2018.

  41. Adversarial Transfer Learning for Chinese Named Entity Recognition with Self-Attention Mechanism Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Shengping Liu, EMNLP, 2018.

  42. Surprisingly Easy Hard-Attention for Sequence to Sequence Learning Shiv Shankar, Siddhant Garg, Sunita Sarawagi, EMNLP, 2018.

  43. Hybrid Neural Attention for Agreement/Disagreement Inference in Online Debates Di Chen, Jiachen Du, Lidong Bing, Ruifeng Xu, EMNLP, 2018.

  44. A Hierarchical Neural Attention-based Text Classifier Koustuv Sinha, Yue Dong, Jackie Chi Kit Cheung, Derek Ruths, EMNLP, 2018.

  45. Supervised Domain Enablement Attention for Personalized Domain Classification Joo-Kyung Kim, Young-Bum Kim, EMNLP, 2018.

  46. Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction Ningyu Zhang, Shumin Deng, Zhanlin Sun, Xi Chen, Wei Zhang, Huajun Chen, EMNLP, 2018.

  47. Improving Multi-label Emotion Classification via Sentiment Classification with Dual Attention Transfer Network Jianfei Yu, Luís Marujo, Jing Jiang, Pradeep Karuturi, William Brendel, EMNLP, 2018.

  48. Improving Large-Scale Fact-Checking using Decomposable Attention Models and Lexical Tagging Nayeon Lee, Chien-Sheng Wu, Pascale Fung, EMNLP, 2018.

  49. Jointly Multiple Events Extraction via Attention-based Graph Information Aggregation Xiao Liu, Zhunchen Luo, Heyan Huang, EMNLP, 2018.

  50. Collective Event Detection via a Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, Yantao Jia, EMNLP, 2018.

  51. Leveraging Gloss Knowledge in Neural Word Sense Disambiguation by Hierarchical Co-Attention Fuli Luo, Tianyu Liu, Zexue He, Qiaolin Xia, Zhifang Sui, Baobao Chang, EMNLP, 2018.

  52. Neural Related Work Summarization with a Joint Context-driven Attention Mechanism Yongzhen Wang, Xiaozhong Liu, Zheng Gao, EMNLP, 2018.

  53. Deriving Machine Attention from Human Rationales Yujia Bao, Shiyu Chang, Mo Yu, Regina Barzilay, EMNLP, 2018.

  54. Attention-Guided Answer Distillation for Machine Reading Comprehension Minghao Hu, Yuxing Peng, Furu Wei, Zhen Huang, Dongsheng Li, Nan Yang, Ming Zhou, EMNLP, 2018.

  55. Multi-Level Structured Self-Attentions for Distantly Supervised Relation Extraction Jinhua Du, Jingguang Han, Andy Way, Dadong Wan, EMNLP, 2018.

  56. Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li, EMNLP, 2018.

  57. WECA:A WordNet-Encoded Collocation-Attention Network for Homographic Pun Recognition Yufeng Diao, Hongfei Lin, Di Wu, Liang Yang, Kan Xu, Zhihao Yang, Jian Wang, Shaowu Zhang, Bo Xu, Dongyu Zhang, EMNLP, 2018.

  58. Multi-Head Attention with Disagreement Regularization Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, Tong Zhang, EMNLP, 2018.

  59. Document-Level Neural Machine Translation with Hierarchical Attention Networks Lesly Miculicich Werlen, Dhananjay Ram, Nikolaos Pappas, James Henderson, EMNLP, 2018.

  60. Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation Junyang Lin, Xu Sun, Xuancheng Ren, Muyu Li, Qi Su, EMNLP, 2018.

  61. Training Deeper Neural Machine Translation Models with Transparent Attention Ankur Bapna, Mia Xu Chen, Orhan Firat, Yuan Cao, Yonghui Wu, EMNLP, 2018.

  62. A Genre-Aware Attention Model to Improve the Likability Prediction of Books Suraj Maharjan, Manuel Montes-y-Gómez, Fabio A. González, Thamar Solorio, EMNLP, 2018.

  63. Multi-grained Attention Network for Aspect-Level Sentiment Classification Feifan Fan, Yansong Feng, Dongyan Zhao, EMNLP, 2018.

  64. Attentive Gated Lexicon Reader with Contrastive Contextual Co-Attention for Sentiment Classification Yi Tay, Anh Tuan Luu, Siu Cheung Hui, Jian Su, EMNLP, 2018.

  65. Contextual Inter-modal Attention for Multi-modal Sentiment Analysis Deepanway Ghosal, Md. Shad Akhtar, Dushyant Chauhan, Soujanya Poria, Asif Ekbal, Pushpak Bhattacharyya, EMNLP, 2018.

  66. A Visual Attention Grounding Neural Model for Multimodal Machine Translation Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, Zhou Yu, EMNLP, 2018.

  67. Phrase-level Self-Attention Networks for Universal Sentence Encoding Wei Wu, Houfeng Wang, Tianyu Liu, Shuming Ma, EMNLP, 2018.

  68. Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, Qifa Ke, EMNLP, 2018.

  69. Abstractive Text-Image Summarization Using Multi-Modal Attentional Hierarchical RNN Jingqiang Chen, Hai Zhuge, EMNLP, 2018.

  70. Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures Gongbo Tang, Mathias Müller, Annette Rios, Rico Sennrich, EMNLP, 2018.

  71. Hard Non-Monotonic Attention for Character-Level Transduction Shijie Wu, Pamela Shapiro, Ryan Cotterell, EMNLP, 2018.

  72. Modeling Localness for Self-Attention Networks Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, Tong Zhang, EMNLP, 2018.

  73. Co-Stack Residual Affinity Networks with Multi-level Attention Refinement for Matching Text Sequences Yi Tay, Anh Tuan Luu, Siu Cheung Hui, EMNLP, 2018.

  74. Learning Universal Sentence Representations with Mean-Max Attention Autoencoder Minghua Zhang, Yunfang Wu, Weikang Li, Wei Li, EMNLP, 2018.

  75. A Co-attention Neural Network Model for Emotion Cause Analysis with Emotional Context Awareness Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, Yifei Zhang, EMNLP, 2018.

  76. Interpretable Emoji Prediction via Label-Wise Attention LSTMs Francesco Barbieri, Luis Espinosa Anke, José Camacho-Collados, Steven Schockaert, Horacio Saggion, EMNLP, 2018.

  77. Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference Reza Ghaeini, Xiaoli Z. Fern, Prasad Tadepalli, EMNLP, 2018.

  78. Linguistically-Informed Self-Attention for Semantic Role Labeling Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum, EMNLP, 2018.

  79. Discourse-Aware Neural Rewards for Coherent Text Generation Antoine Bosselut, Asli Çelikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, Yejin Choi, NAACL, 2018.

  80. A Mixed Hierarchical Attention Based Encoder-Decoder Approach for Standard Table Summarization Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema, Mitesh M. Khapra, Shreyas Shetty, NAACL, 2018.

  81. Combining Character and Word Information in Neural Machine Translation Using a Multi-Level Attention Huadong Chen, Shujian Huang, David Chiang, Xinyu Dai, Jiajun Chen, NAACL, 2018.

  82. Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Mitesh M. Khapra, NAACL, 2018.

  83. Generating Topic-Oriented Summaries Using Neural Attention Kundan Krishna, Balaji Vasan Srinivasan, NAACL, 2018.

  84. Higher-Order Syntactic Attention Network for Longer Sentence Compression Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata, NAACL, 2018.

  85. How Time Matters: Learning Time-Decay Attention for Contextual Spoken Language Understanding in Dialogues Shang-Yu Su, Pei-Chieh Yuan, Yun-Nung Chen, NAACL, 2018.

  86. Knowledge-Enriched Two-Layered Attention Network for Sentiment Analysis Abhishek Kumar, Daisuke Kawahara, Sadao Kurohashi, NAACL, 2018.

  87. Self-Attention with Relative Position Representations Peter Shaw, Jakob Uszkoreit, Ashish Vaswani, NAACL, 2018.
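Entry 87 injects relative-position information directly into the attention scores via learned embeddings of the clipped offset j − i. A compact sketch of the key-side score computation as I read it (value-side term omitted; `rel_k` is a hypothetical table of 2·max_dist + 1 learned vectors, not a name from the paper):

```python
import numpy as np

def relative_attention_scores(Q, K, rel_k, max_dist):
    """Scores (Q K^T + Q . a_ij) / sqrt(d), where a_ij embeds clip(j - i)."""
    n, d = Q.shape
    rel = np.arange(n)[None, :] - np.arange(n)[:, None]  # offset j - i, shape (n, n)
    idx = np.clip(rel, -max_dist, max_dist) + max_dist   # shift into [0, 2*max_dist]
    a_k = rel_k[idx]                                     # (n, n, d) relative-key vectors
    # Q @ K.T is the usual content term; the einsum adds Q_i . a_ij per pair (i, j)
    return (Q @ K.T + np.einsum('id,ijd->ij', Q, a_k)) / np.sqrt(d)

rng = np.random.default_rng(1)
Q, K = rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
zero_rel = np.zeros((2 * 2 + 1, 4))
# With all-zero relative embeddings this reduces to plain scaled dot-product scores.
assert np.allclose(relative_attention_scores(Q, K, zero_rel, 2),
                   Q @ K.T / np.sqrt(4))
```

Clipping to max_dist is what lets the same embedding table generalize to sequence lengths unseen during training.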

  88. Target Foresight Based Attention for Neural Machine Translation Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, Max Meng, NAACL, 2018.

  89. Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning Xin Wang, Yuan-Fang Wang, William Yang Wang, NAACL, 2018.

  90. Query and Output: Generating Words by Querying Distributed Word Representations for Paraphrase Generation Shuming Ma, Xu Sun, Wei Li, Sujian Li, Wenjie Li, Xuancheng Ren, NAACL, 2018.

  91. Medical Concept Embedding with Time-Aware Attention Xiangrui Cai, Jinyang Gao, Kee Yuan Ngiam, Beng Chin Ooi, Ying Zhang, Xiaojie Yuan, IJCAI, 2018.

  92. Attention-Fused Deep Matching Network for Natural Language Inference Chaoqun Duan, Lei Cui, Xinchi Chen, Furu Wei, Conghui Zhu, Tiejun Zhao, IJCAI, 2018.

  93. Multi-modal Sentence Summarization with Modality Attention and Image Filtering Haoran Li, Junnan Zhu, Tianshang Liu, Jiajun Zhang, Chengqing Zong, IJCAI, 2018.

  94. Code Completion with Neural Attention and Pointer Networks Jian Li, Yue Wang, Michael R. Lyu, Irwin King, IJCAI, 2018.

  95. Aspect Term Extraction with History Attention and Selective Transformation Xin Li, Lidong Bing, Piji Li, Wai Lam, Zhimou Yang, IJCAI, 2018.

  96. Feature Enhancement in Attention for Visual Question Answering Yuetan Lin, Zhangyang Pang, Donghui Wang, Yueting Zhuang, IJCAI, 2018.

  97. Beyond Polarity: Interpretable Financial Sentiment Analysis with Hierarchical Query-driven Attention Ling Luo, Xiang Ao, Feiyang Pan, Jin Wang, Tong Zhao, Ningzi Yu, Qing He, IJCAI, 2018.

  98. Translating Embeddings for Knowledge Graph Completion with Relation Attention Mechanism Wei Qian, Cong Fu, Yu Zhu, Deng Cai, Xiaofei He, IJCAI, 2018.

  99. Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang, IJCAI, 2018.

  100. Listen, Think and Listen Again: Capturing Top-down Auditory Attention for Speaker-independent Speech Separation Jing Shi, Jiaming Xu, Guangcan Liu, Bo Xu, IJCAI, 2018.

  101. Multiway Attention Networks for Modeling Sentence Pairs Chuanqi Tan, Furu Wei, Wenhui Wang, Weifeng Lv, Ming Zhou, IJCAI, 2018.

  102. Get The Point of My Utterance! Learning Towards Effective Responses with Multi-Head Attention Mechanism Chongyang Tao, Shen Gao, Mingyue Shang, Wei Wu, Dongyan Zhao, Rui Yan, IJCAI, 2018.

  103. Hermitian Co-Attention Networks for Text Matching in Asymmetrical Domains Yi Tay, Anh Tuan Luu, Siu Cheung Hui, IJCAI, 2018.

  104. Aspect Sentiment Classification with both Word-level and Clause-level Attention Networks Jingjing Wang, Jie Li, Shoushan Li, Yangyang Kang, Min Zhang, Luo Si, Guodong Zhou, IJCAI, 2018.

  105. Densely Connected CNN with Multi-scale Feature Attention for Text Classification Shiyao Wang, Minlie Huang, Zhidong Deng, IJCAI, 2018.

  106. Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks Renjie Zheng, Junkun Chen, Xipeng Qiu, IJCAI, 2018.

  107. Commonsense Knowledge Aware Conversation Generation with Graph Attention Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, Xiaoyan Zhu, IJCAI, 2018.

[2019]

  1. To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression Yitian Yuan, Tao Mei, Wenwu Zhu, AAAI, 2019.

  2. An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss Peixiang Zhong, Di Wang, Chunyan Miao, AAAI, 2019.

  3. Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding Peifeng Wang, Jialong Han, Chenliang Li, Rong Pan, AAAI, 2019.

  4. Cross-relation Cross-bag Attention for Distantly-supervised Relation Extraction Yujin Yuan, Liyuan Liu, Siliang Tang, Zhongfei Zhang, Yueting Zhuang, Shiliang Pu, Fei Wu, Xiang Ren, AAAI, 2019.

  5. Deep Metric Learning by Online Soft Mining and Class-Aware Attention Xinshao Wang, Yang Hua, Elyor Kodirov, Guosheng Hu, Neil M. Robertson, AAAI, 2019.

  6. Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering Yang Deng, Yuexiang Xie, Yaliang Li, Min Yang, Nan Du, Wei Fan, Kai Lei, Ying Shen, AAAI, 2019.

  7. Character-Level Language Modeling with Deeper Self-Attention Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones, AAAI, 2019.

  8. Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Guoping Hu, AAAI, 2019.