Reliable MRC

Paper list for Reliable Machine Reading Comprehension, including topics related to robustness, prediction safety, and continual learning.

Table of Contents

  • Machine Reading Comprehension Overview
      • Datasets
      • MRC Models based on Attention Mechanism
      • MRC Models based on Pretrained Language Models
  • Enhancing Robustness in Machine Reading Comprehension
      • Attack Method
      • Defense Method
  • Ensuring Prediction Safety in Machine Reading Comprehension
      • Traditional Uncertainty Estimation Methods
      • Methods in Machine Reading Comprehension
  • Advancing Continual Learning in Machine Reading Comprehension
      • Traditional Continual Learning Methods
      • Methods in Machine Reading Comprehension

Machine Reading Comprehension Overview

Datasets

  • Hermann K M, Kociský T, Grefenstette E, et al. Teaching machines to read and comprehend. Proceedings of the 28th International Conference on Neural Information Processing Systems. 2015: 1693-1701. paper
  • Rajpurkar P, Zhang J, Lopyrev K, et al. Squad: 100,000+ questions for machine comprehension of text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2016: 2383-2392. paper
  • Dhingra B, Mazaitis K, Cohen W W. Quasar: Datasets for question answering by search and reading. CoRR, 2017, abs/1707.03904. paper
  • Yagcioglu S, Erdem A, Erdem E, et al. Recipeqa: A challenge dataset for multimodal comprehension of cooking recipes. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 1358-1368. paper
  • Trischler A, Wang T, Yuan X, et al. Newsqa: A machine comprehension dataset. Proceedings of the 2nd Workshop on Representation Learning for NLP. 2017: 191-200. paper
  • Dunn M, Sagun L, Higgins M, et al. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, 2017, abs/1704.05179. paper
  • Lai G, Xie Q, Liu H, et al. RACE: large-scale reading comprehension dataset from examinations. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 785-794. paper
  • Ostermann S, Modi A, Roth M, et al. Mcscript: A novel dataset for assessing machine comprehension using script knowledge. Proceedings of the Eleventh International Conference on Language Resources and Evaluation. 2018. paper
  • Tapaswi M, Zhu Y, Stiefelhagen R, et al. Movieqa: Understanding stories in movies through question-answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4631-4640. paper
  • Nguyen T, Rosenberg M, Song X, et al. MS MARCO: A human generated machine reading comprehension dataset. Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (CoCo@NIPS). 2016. paper
  • Kociský T, Schwarz J, Blunsom P, et al. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 2018, 6: 317-328. paper
  • Zhang S, Liu X, Liu J, et al. Record: Bridging the gap between human and machine commonsense reading comprehension. CoRR, 2018, abs/1810.12885. paper
  • Yang Z, Qi P, Zhang S, et al. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 2369-2380. paper
  • Reddy S, Chen D, Manning C D. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 2019, 7: 249-266. paper
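
Many of the extractive datasets above (e.g. SQuAD and NewsQA) are distributed in a common JSON layout of articles, paragraphs, and question-answer pairs. The snippet below is a minimal, illustrative reader for that SQuAD-style format; the file path is a placeholder.

```python
# Minimal sketch: iterating over a SQuAD-style JSON file.
import json

def iter_squad_examples(path):
    """Yield (context, question, answer_texts) triples from a SQuAD-style file."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)["data"]
    for article in data:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                answers = [a["text"] for a in qa.get("answers", [])]
                yield context, qa["question"], answers

# Example usage (the path is hypothetical):
# for context, question, answers in iter_squad_examples("train-v1.1.json"):
#     ...
```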

MRC Models based on Attention Mechanism

  • Mikolov T, Sutskever I, Chen K, et al. Distributed representations of words and phrases and their compositionality. Proceedings of the 26th International Conference on Neural Information Processing Systems. 2013: 3111-3119. paper
  • Pennington J, Socher R, Manning C D. Glove: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. 2014: 1532-1543. paper
  • Bojanowski P, Grave E, Joulin A, et al. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 2017, 5: 135-146. paper
  • Peters M E, Neumann M, Iyyer M, et al. Deep contextualized word representations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics. 2018: 2227-2237. paper
  • Seo M J, Kembhavi A, Farhadi A, et al. Bidirectional attention flow for machine comprehension. The 5th International Conference on Learning Representations. 2017. paper
  • Chen D, Fisch A, Weston J, et al. Reading wikipedia to answer open-domain questions. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 1870-1879. paper
  • Hermann K M, Kociský T, Grefenstette E, et al. Teaching machines to read and comprehend. Proceedings of the 28th International Conference on Neural Information Processing Systems. 2015: 1693-1701. paper
  • Chen D, Bolton J, Manning C D. A thorough examination of the cnn/daily mail reading comprehension task. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 2016: 2358-2367. paper
  • Kadlec R, Schmid M, Bajgar O, et al. Text understanding with the attention sum reader network. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 2016: 908-918. paper
  • Cui Y, Chen Z, Wei S, et al. Attention-over-attention neural networks for reading comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 593-602. paper
  • Wang W, Yang N, Wei F, et al. Gated self-matching networks for reading comprehension and question answering. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 189-198. paper
  • Yu A W, Dohan D, Luong M, et al. Qanet: Combining local convolution with global self-attention for reading comprehension. The 6th International Conference on Learning Representations. 2018. paper
  • Zhang S, Zhao H, Wu Y, et al. DCMN+: dual co-matching network for multi-choice reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 2020: 9563-9570. paper
  • Chen Z, Cui Y, Ma W, et al. Convolutional spatial attention model for reading comprehension with multiple-choice questions. Proceedings of the AAAI Conference on Artificial Intelligence. 2019: 6276-6283. paper
  • Ran Q, Li P, Hu W, et al. Option comparison network for multiple-choice reading comprehension. CoRR, 2019, abs/1903.03033. paper
  • Wang W, Wu C, Yan M. Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 1705-1714. paper
  • Wang S, Jiang J. Machine comprehension using match-lstm and answer pointer. The 5th International Conference on Learning Representations. 2017. paper
  • Hu M, Wei F, Peng Y, et al. Read + verify: Machine reading comprehension with unanswerable questions. Proceedings of the AAAI Conference on Artificial Intelligence. 2019: 6529-6537. paper
  • Tan C, Wei F, Yang N, et al. S-net: From answer extraction to answer synthesis for machine reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 2018: 5940-5947. paper
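
The models in this group differ in architecture, but most share one core step: scoring each context position against a question representation and forming a question-aware summary of the context. The NumPy toy below sketches only that step; the hidden size, the pooled question vector, and the random inputs are illustrative assumptions, not any specific model above.

```python
# Toy sketch of question-to-context attention shared by many MRC models.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8                                  # hidden size (illustrative)
context = rng.normal(size=(20, d))     # 20 encoded context positions
question = rng.normal(size=(d,))       # pooled question vector

scores = context @ question / np.sqrt(d)   # scaled dot-product scores
weights = softmax(scores)                  # attention distribution over context
summary = weights @ context                # question-aware context summary
print(weights.shape, summary.shape)        # (20,), (8,)
```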

MRC Models based on Pretrained Language Models

  • Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017: 5998-6008. paper
  • Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training. OpenAI, 2018. paper
  • Devlin J, Chang M, Lee K, et al. BERT: pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. 2019: 4171-4186. paper
  • Liu Y, Ott M, Goyal N, et al. Roberta: A robustly optimized BERT pretraining approach. CoRR, 2019, abs/1907.11692. paper
  • Raffel C, Shazeer N, Roberts A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 2020, 21: 140:1-140:67. paper
  • Joshi M, Chen D, Liu Y, et al. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 2020, 8: 64-77. paper
  • Yang A, Wang Q, Liu J, et al. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 2346-2357. paper
  • Ram O, Kirstain Y, Berant J, et al. Few-shot question answering by pretraining span selection. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics. 2021: 3066-3079. paper
  • Jiao F, Guo Y, Niu Y, et al. REPT: bridging language models and machine reading comprehension via retrieval-based pre-training. Findings of the Association for Computational Linguistics: ACL 2021. 2021: 150-163. paper
  • Dua D, Wang Y, Dasigi P, et al. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. 2019: 2368-2378. paper
  • Ran Q, Lin Y, Li P, et al. Numnet: Machine reading comprehension with numerical reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. 2019: 2474-2484. paper
  • Zhou Y, Bao J, Duan C, et al. OPERA: operation-pivoted discrete reasoning over text. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics. 2022: 1655-1666. paper
  • Shakeri S, dos Santos C N, Zhu H, et al. End-to-end synthetic data generation for domain adaptation of question answering systems. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 2020: 5445-5460. paper
  • Cao Y, Fang M, Yu B, et al. Unsupervised domain adaptation on reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 2020: 7480-7487. paper
  • Wang H, Yu D, Sun K, et al. Evidence sentence extraction for machine reading comprehension. Proceedings of the 23rd Conference on Computational Natural Language Learning. 2019: 696-707. paper
  • Chen N, Shou L, Gong M, et al. From good to best: Two-stage training for cross-lingual machine reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 2022: 10501-10508. paper
  • Wu L, Wu S, Zhang X, et al. Learning disentangled semantic representations for zero-shot cross-lingual transfer in multilingual machine reading comprehension. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022: 991-1000. paper
  • Liu J, Chen Y, Liu K, et al. Event extraction as machine reading comprehension. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 2020: 1641-1651. paper
  • Yu M, Liu J, Chen Y, et al. Cross-domain slot filling as machine reading comprehension. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence. 2021: 3992-3998. paper
  • Yang Y, Zhao H. Aspect-based sentiment analysis as machine reading comprehension. Proceedings of the 29th International Conference on Computational Linguistics. 2022: 2461-2471. paper
  • Khashabi D, Min S, Khot T, et al. Unifiedqa: Crossing format boundaries with a single QA system. Findings of the Association for Computational Linguistics: EMNLP 2020. 2020: 1896-1907. paper
  • Khashabi D, Kordi Y, Hajishirzi H. Unifiedqa-v2: Stronger generalization via broader cross-format training. CoRR, 2022, abs/2202.12359. paper
  • Wang J, Wang C, Qiu M, et al. KECP: knowledge enhanced contrastive prompting for few-shot extractive question answering. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 3152-3163. paper
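
For the pretrained-model line of work, a span-extraction reader fine-tuned on SQuAD is the usual starting point. Below is a minimal sketch using the Hugging Face transformers question-answering pipeline; the checkpoint name is an assumption, and any SQuAD-finetuned model can be substituted.

```python
# Minimal extractive MRC sketch with a pretrained transformer.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")  # assumed checkpoint

result = qa(question="What does MRC stand for?",
            context="Machine reading comprehension (MRC) asks a model to "
                    "answer questions about a given passage.")
print(result["answer"], result["score"])  # predicted span and model confidence
```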

Enhancing Robustness in Machine Reading Comprehension

Attack Method

  • Feng S, Wallace E, Grissom II A, et al. Pathologies of neural models make interpretations difficult. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018: 3719-3728. paper
  • Min S, Zhong V, Socher R, et al. Efficient and robust question answering from minimal context over documents. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 1725-1735. paper
  • Jiang Y, Bansal M. Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA. Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 2726-2736. paper
  • Lai Y, Zhang C, Feng Y, et al. Why machine reading comprehension models learn shortcuts?. Findings of the Association for Computational Linguistics: ACL 2021. 2021: 989-1002. paper
  • Zhou X, Luo S, Wu Y. Co-attention hierarchical network: Generating coherent long distractors for reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 2020: 9725-9732. paper
  • Gao Y, Bing L, Li P, et al. Generating distractors for reading comprehension questions from real examinations. Proceedings of the AAAI Conference on Artificial Intelligence. 2019: 6423-6430. paper
  • Welbl J, Liu N F, Gardner M. Crowdsourcing multiple choice science questions. Proceedings of the 3rd Workshop on Noisy User-generated Text. 2017: 94-106. paper
  • Chen C, Liou H, Chang J S. FAST - an automatic generation system for grammar tests. Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics. 2006. paper
  • Stasaski K, Hearst M A. Multiple choice question generation utilizing an ontology. Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications. 2017: 303-312. paper
  • Jia R, Liang P. Adversarial examples for evaluating reading comprehension systems. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 2021-2031. paper
  • Wang Y, Bansal M. Robust machine comprehension models via adversarial training. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics. 2018: 575-581. paper
  • Gan W C, Ng H T. Improving the robustness of question answering systems to question paraphrasing. Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 6065-6075. paper
  • Wu W, Arendt D, Volkova S. Evaluating neural machine comprehension model robustness to noisy inputs and adversarial attacks. CoRR, 2020, abs/2005.00190. paper
  • Wallace E, Feng S, Kandpal N, et al. Universal adversarial triggers for attacking and analyzing NLP. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. 2019: 2153-2162. paper
  • Ribeiro M T, Wu T, Guestrin C, et al. Beyond accuracy: Behavioral testing of NLP models with checklist. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 4902-4912. paper
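
Several of the attacks above (notably Jia and Liang, 2017) append a grammatical but misleading distractor sentence to the passage and check whether the model's answer changes. The sketch below illustrates that idea with a hand-written toy distractor; it is not the original generation algorithm, and the checkpoint name is an assumption.

```python
# Toy AddSent-style probe: compare predictions with and without a distractor.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")  # assumed checkpoint

context = "The bridge was completed in 1932 and spans the river near the old mill."
distractor = " The tunnel was completed in 1956."
question = "When was the bridge completed?"

print(qa(question=question, context=context)["answer"])
print(qa(question=question, context=context + distractor)["answer"])
```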

Defense Method

  • Jia R, Liang P. Adversarial examples for evaluating reading comprehension systems. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 2021-2031. paper
  • Wallace E, Feng S, Kandpal N, et al. Universal adversarial triggers for attacking and analyzing NLP. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. 2019: 2153-2162. paper
  • Wang Y, Bansal M. Robust machine comprehension models via adversarial training. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics. 2018: 575-581. paper
  • Gan W C, Ng H T. Improving the robustness of question answering systems to question paraphrasing. Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 6065-6075. paper
  • Yang Z, Cui Y, Che W, et al. Improving machine reading comprehension via adversarial training. CoRR, 2019, abs/1911.03614. paper
  • Liu K, Liu X, Yang A, et al. A robust adversarial training approach to machine reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 2020: 8392-8400. paper
  • Yeh Y, Chen Y. Qainfomax: Learning robust question answering system by mutual information maximization. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. 2019: 3368-3373. paper
  • Zhou M, Huang M, Zhu X. Robust reading comprehension with linguistic constraints via posterior regularization. IEEE Transactions on Audio, Speech, and Language Processing, 2020, 28: 2500-2510. paper
  • Wang C, Jiang H. Explicit utilization of general knowledge in machine reading comprehension. Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 2263-2272. paper
  • Mihaylov T, Frank A. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 821-832. paper
  • Chen Q, Zhu X, Ling Z, et al. Neural natural language inference models enhanced with external knowledge. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 2406-2417. paper
  • Feng S, Wallace E, Grissom II A, et al. Pathologies of neural models make interpretations difficult. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018: 3719-3728. paper
  • Yang A, Wang Q, Liu J, et al. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 2346-2357. paper
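
A recurring defense in this list is embedding-level adversarial training: perturb the word embeddings along the gradient of the loss, train on the perturbed input as well, then restore the weights. The PyTorch sketch below shows a generic FGM-style step under that idea; `loss_fn` and the embedding handle are assumptions, and it is not an exact reproduction of any single paper above.

```python
# Generic FGM-style adversarial training step (caller runs optimizer.step() after).
import torch

def fgm_step(model, embedding, loss_fn, batch, epsilon=1.0):
    loss = loss_fn(model, batch)
    loss.backward()                                   # gradients for the clean loss

    backup = embedding.weight.data.clone()
    grad = embedding.weight.grad
    norm = torch.norm(grad)
    if norm and not torch.isnan(norm):
        embedding.weight.data.add_(epsilon * grad / norm)   # adversarial perturbation

    adv_loss = loss_fn(model, batch)
    adv_loss.backward()                               # accumulate adversarial gradients
    embedding.weight.data = backup                    # restore original embeddings
```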

Ensuring Prediction Safety in Machine Reading Comprehension

Traditional Uncertainty Estimation Methods

  • Hendrycks D, Gimpel K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. The 5th International Conference on Learning Representations. 2017. paper
  • Guo C, Pleiss G, Sun Y, et al. On calibration of modern neural networks. Proceedings of the 34th International Conference on Machine Learning. 2017: 1321-1330. paper
  • Kuleshov V, Fenner N, Ermon S. Accurate uncertainties for deep learning using calibrated regression. Proceedings of the 35th International Conference on Machine Learning. 2018: 2801-2809. paper
  • Maroñas J, Paredes R, Ramos D. Calibration of deep probabilistic models with decoupled bayesian neural networks. Neurocomputing, 2020, 407: 194-205. paper
  • Gal Y, Ghahramani Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. Proceedings of the 33rd International Conference on Machine Learning. 2016: 1050-1059. paper
  • Kamath A, Jia R, Liang P. Selective question answering under domain shift. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 5684-5696. paper
  • Jeong S, Baek J, Hwang S J, et al. Realistic conversational question answering with answer selection based on calibrated confidence and uncertainty measurement. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023. paper
  • Raina V, Gales M J F. Answer uncertainty and unanswerability in multiple-choice machine reading comprehension. Findings of the Association for Computational Linguistics: ACL 2022. 2022: 1020-1034. paper
  • Li M, Li M, Xiong K, et al. Multi-task dense retrieval via model uncertainty fusion for open-domain question answering. Findings of the Association for Computational Linguistics: EMNLP 2021. 2021: 274-287. paper
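
Among the calibration methods above, temperature scaling (Guo et al., 2017) is the simplest: fit a single temperature T on held-out logits so that softmax(logits / T) better matches empirical accuracy. A minimal PyTorch sketch follows; `val_logits` and `val_labels` are placeholders for validation-set predictions.

```python
# Fit a single temperature on held-out logits (temperature scaling).
import torch

def fit_temperature(val_logits, val_labels, steps=100, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)      # optimize log T to keep T > 0
    optimizer = torch.optim.Adam([log_t], lr=lr)
    nll = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# Usage: T = fit_temperature(val_logits, val_labels)
#        probs = torch.softmax(test_logits / T, dim=-1)
```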

Methods in Machine Reading Comprehension

  • Hendrycks D, Gimpel K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. The 5th International Conference on Learning Representations. 2017. paper
  • Gonzalez A V, Bansal G, Fan A, et al. Human evaluation of spoken vs. visual explanations for open-domain QA. CoRR, 2020, abs/2012.15075. paper
  • Liusie A, Raina V, Gales M J F. “World knowledge” in multiple choice reading comprehension. CoRR, 2022, abs/2211.07040. paper
  • Jiang Z, Araki J, Ding H, et al. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 2021, 9: 962-977. paper
  • Si C, Zhao C, Min S, et al. Re-examining calibration: The case of question answering. Findings of the Association for Computational Linguistics: EMNLP 2022. 2022: 2814-2829. paper
  • Kamath A, Jia R, Liang P. Selective question answering under domain shift. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 5684-5696. paper
  • Su L, Guo J, Fan Y, et al. Controlling risk of web question answering. Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 2019: 115-124. paper
  • Zhang S, Gong C, Choi E. Knowing more about questions can help: Improving calibration in question answering. Findings of the Association for Computational Linguistics: ACL 2021. 2021: 1958-1970. paper
  • Ye X, Durrett G. Can explanations be useful for calibrating black box models?. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022: 6199-6212. paper
  • Xing L, Hu Y, Xie Y, et al. Calibration of the multiple choice machine reading comprehension. International Joint Conference on Neural Networks, IJCNN 2022, Padua, Italy, July 18-23, 2022. 2022: 1-8. paper
  • Kumar S. Answer-level calibration for free-form multiple choice question answering. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022: 665-679. paper
  • Jiang Z, Araki J, Ding H, et al. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 2021, 9: 962-977. paper
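
Selective prediction, as in Kamath et al. (2020), frames prediction safety as a coverage-risk trade-off: the system answers only when its confidence clears a threshold, and is evaluated by how often it answers (coverage) versus how often those answers are wrong (risk). A small illustrative helper with toy inputs is sketched below.

```python
# Coverage and risk at a given confidence threshold (toy sketch).
import numpy as np

def coverage_and_risk(confidences, correct, threshold):
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=bool)
    answered = confidences >= threshold
    coverage = answered.mean()                              # fraction of questions answered
    risk = 0.0 if not answered.any() else (~correct[answered]).mean()
    return coverage, risk

# Toy example: two of four questions answered, both correctly.
print(coverage_and_risk([0.9, 0.4, 0.8, 0.2], [1, 0, 1, 0], threshold=0.5))
```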

Advancing Continual Learning in Machine Reading Comprehension

Traditional Continual Learning Methods

  • French R M. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 1999, 3(4): 128-135. paper
  • Yoon J, Yang E, Lee J, et al. Lifelong learning with dynamically expandable networks. The 6th International Conference on Learning Representations. 2018. paper
  • Xu J, Zhu Z. Reinforced continual learning. Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018: 907-916. paper
  • Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 2017, 114(13): 3521-3526. paper
  • Schwarz J, Czarnecki W, Luketina J, et al. Progress & compress: A scalable framework for continual learning. Proceedings of the 35th International Conference on Machine Learning. 2018: 4535-4544. paper
  • Lopez-Paz D, Ranzato M. Gradient episodic memory for continual learning. Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017: 6467-6476. paper
  • Chaudhry A, Ranzato M, Rohrbach M, et al. Efficient lifelong learning with A-GEM. The 7th International Conference on Learning Representations. 2019. paper
  • Riemer M, Cases I, Ajemian R, et al. Learning to learn without forgetting by maximizing transfer and minimizing interference. The 7th International Conference on Learning Representations. 2019. paper
  • Wu C, Herranz L, Liu X, et al. Memory replay gans: Learning to generate new categories without forgetting. Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018: 5966-5976. paper
  • Chaudhry A, Dokania P K, Ajanthan T, et al. Riemannian walk for incremental learning: Understanding forgetting and intransigence. Proceedings of the European Conference on Computer Vision. 2018: 556-572. paper
  • Shim D, Mai Z, Jeong J, et al. Online class-incremental continual learning with adversarial shapley value. Thirty-Fifth AAAI Conference on Artificial Intelligence. 2021: 9630-9638. paper
  • Buzzega P, Boschini M, Porrello A, et al. Dark experience for general continual learning: a strong, simple baseline. Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020. paper
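
Regularization-based methods such as EWC (Kirkpatrick et al., 2017) add a quadratic penalty that discourages moving parameters that carried high Fisher information for earlier tasks. The sketch below shows that penalty in PyTorch; the `fisher` and `old_params` dictionaries are assumed to have been computed after training on the previous task.

```python
# EWC-style quadratic penalty over previously important parameters.
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Total loss on the current task:
# loss = task_loss + ewc_penalty(model, fisher, old_params)
```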

Methods in Machine Reading Comprehension

  • Su L, Guo J, Zhang R, et al. Continual domain adaptation for machine reading comprehension. Proceedings of the 29th ACM International Conference on Information and Knowledge Management. 2020: 1395-1404. paper
  • de Masson d’Autume C, Ruder S, Kong L, et al. Episodic memory in lifelong language learning. Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019: 13122-13131. paper
  • Wang Z, Mehta S V, Póczos B, et al. Efficient meta lifelong-learning with limited memory. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 2020: 535-548. paper
  • Abujabal A, Roy R S, Yahya M, et al. Never-ending learning for open-domain question answering over knowledge bases. Proceedings of the 2018 World Wide Web Conference. 2018: 1053-1062. paper
  • Echegoyen G, Rodrigo Á, Peñas A. Study of a lifelong learning scenario for question answering. Expert Systems with Applications, 2022, 209: 118271. paper
  • Zhong W, Gao Y, Ding N, et al. Proqa: Structural prompt-based pre-training for unified question answering. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics. 2022: 4230-4243. paper
  • Zheng Y. Continuous QA learning with structured prompts. CoRR, 2022. paper
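
Several of the MRC-specific approaches above rely on an episodic memory that stores a subset of previously seen examples and replays them sparsely during later training. The sketch below is a generic reservoir-sampling replay buffer, not the exact memory module of any listed paper.

```python
# Generic replay buffer with reservoir sampling (uniform over all seen examples).
import random

class ReplayMemory:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)     # keep each seen example with equal probability
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))
```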
