Awesome-Multilingual-LLMs-Papers

This repository contains a list of the papers covered in our survey:

Multilingual Large Language Models: A Systematic Survey

Shaolin Zhu¹, Supryadi¹, Shaoyang Xu¹, Haoran Sun¹, Leiyu Pan¹, Menglong Cui¹,

Jiangcun Du¹, Renren Jin¹, António Branco²†, Deyi Xiong¹†*

¹TJUNLP Lab, College of Intelligence and Computing, Tianjin University

²NLX, Department of Informatics, University of Lisbon

(*: Corresponding author, †: Advisory role)

Papers

Multilingual Corpora

Pretraining Datasets

  1. "CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages".

    Thuat Nguyen et al. LREC-COLING 2024. [Paper]

  2. "RedPajama: an Open Dataset for Training Large Language Models".

    Maurice Weber et al. arXiv 2024. [Paper] [GitHub]

  3. "The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset".

    Hugo Laurençon et al. NeurIPS 2022. [Paper] [GitHub]

  4. "Zyda: A 1.3T Dataset for Open Language Modeling".

    Yury Tokpanov et al. arXiv 2024. [Paper] [GitHub]
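
Most of these corpora are distributed through the Hugging Face Hub, so they can be inspected without downloading the full multi-terabyte dumps. A minimal sketch using the datasets library in streaming mode; the CulturaX dataset ID, language config, and text field are assumptions taken from its dataset card, so adjust per corpus:

```python
from datasets import load_dataset

# Dataset ID, language config, and column name assumed from the
# CulturaX dataset card; streaming avoids a full download.
ds = load_dataset("uonlp/CulturaX", "vi", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["text"][:80])  # first 80 characters of each document
    if i == 2:
        break
```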

SFT Datasets

  1. "Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning".

    Shivalika Singh et al. ACL 2024. [Paper]

  2. "Bactrian-X: Multilingual Replicable Instruction-Following Models with Low-Rank Adaptation".

    Haonan Li et al. arXiv 2023. [Paper] [GitHub]

  3. "CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society".

    Guohao Li et al. NeurIPS 2023. [Paper] [GitHub]

  4. "OpenAssistant Conversations - Democratizing Large Language Model Alignment".

    Andreas Köpf et al. NeurIPS 2023. [Paper] [GitHub]

  5. "Phoenix: Democratizing ChatGPT across Languages".

    Zhihong Chen et al. arXiv 2023. [Paper] [GitHub]

  6. "Crosslingual Generalization through Multitask Finetuning".

    Niklas Muennighoff et al. ACL 2023. [Paper] [GitHub]

RLHF Datasets

  1. "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena".

    Lianmin Zheng et al. NeurIPS 2023. [Paper] [GitHub]

  2. "OpenAssistant Conversations - Democratizing Large Language Model Alignment".

    Andreas Köpf et al. NeurIPS 2023. [Paper] [GitHub]

Multilingual Tuning

Basic Tuning Strategies

Instruction Tuning
  1. "Finetuned Language Models Are Zero-Shot Learners".

    Jason Wei et al. ICLR 2022. [Paper] [GitHub]

  2. "Multitask Prompted Training Enables Zero-Shot Task Generalization".

    Victor Sanh et al. ICLR 2022. [Paper] [GitHub]

  3. "Training language models to follow instructions with human feedback".

    Long Ouyang et al. NeurIPS 2022. [Paper] [GitHub]

  4. "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback".

    Yuntao Bai et al. arXiv 2022. [Paper]

  5. "A General Language Assistant as a Laboratory for Alignment".

    Amanda Askell et al. arXiv 2021. [Paper]

  6. "Self-Instruct: Aligning Language Models with Self-Generated Instructions".

    Yizhong Wang et al. ACL 2023. [Paper] [GitHub]

  7. "WizardLM: Empowering Large Language Models to Follow Complex Instructions".

    Can Xu et al. ICLR 2024. [Paper] [GitHub]

  8. "WizardCoder: Empowering Code Large Language Models with Evol-Instruct".

    Ziyang Luo et al. ICLR 2024. [Paper] [GitHub]

  9. "Self-Alignment with Instruction Backtranslation".

    Xian Li et al. ICLR 2024. [Paper]

  10. "Instruction Tuning With Loss Over Instructions".

    Zhengyan Shi et al. NeurIPS 2024. [Paper] [GitHub]

  11. "Instruction Fine-Tuning: Does Prompt Loss Matter?".

    Mathew Huerta-Enochian et al. EMNLP 2024. [Paper]

  12. "Instruction Tuning for Large Language Models: A Survey".

    Shengyu Zhang et al. arXiv 2023. [Paper] [GitHub]
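
Two entries above, "Instruction Tuning With Loss Over Instructions" and "Instruction Fine-Tuning: Does Prompt Loss Matter?", examine whether the fine-tuning loss should cover the prompt tokens as well as the response. A minimal PyTorch sketch of the conventional response-only baseline they compare against; the shapes and the -100 ignore index follow common practice, not any single paper's code:

```python
import torch.nn.functional as F

def response_only_loss(logits, input_ids, prompt_len):
    """Next-token cross-entropy over response tokens only.

    logits:     (seq_len, vocab_size) model outputs for one example
    input_ids:  (seq_len,) concatenated prompt + response token ids
    prompt_len: number of prompt (instruction) tokens to mask out
    """
    # Standard shift: the logits at position t predict token t + 1.
    shift_logits = logits[:-1]
    shift_labels = input_ids[1:].clone()
    # Prompt tokens get the ignore index, so they add no gradient.
    shift_labels[: prompt_len - 1] = -100
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
```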

Preference Tuning
  1. "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback".

    Yuntao Bai et al. arXiv 2022. [Paper]

  2. "Training language models to follow instructions with human feedback".

    Long Ouyang et al. NeurIPS 2022. [Paper] [GitHub]

  3. "Fine-Tuning Language Models from Human Preferences".

    Daniel M. Ziegler et al. arXiv 2019. [Paper] [GitHub]

  4. "Learning to summarize from human feedback".

    Nisan Stiennon et al. NeurIPS 2020. [Paper] [GitHub]

  5. "WebGPT: Browser-assisted question-answering with human feedback".

    Reiichiro Nakano et al. arXiv 2021. [Paper]

  6. "Training language models to follow instructions with human feedback".

    Long Ouyang et al. NeurIPS 2022. [Paper] [GitHub]

  7. "Direct Preference Optimization: Your Language Model is Secretly a Reward Model".

    Rafael Rafailov et al. NeurIPS 2023. [Paper]

  8. "Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons".

    Ralph Allan Bradley and Milton E. Terry Biometrika 1952. [Paper]

  9. "The Impact of Preference Agreement in Reinforcement Learning from Human Feedback: A Case Study in Summarization".

    Sian Gooding and Hassan Mansoor arXiv 2023. [Paper]

  10. "Understanding the Effects of RLHF on LLM Generalisation and Diversity".

    Robert Kirk et al. ICLR 2024. [Paper] [GitHub]

  11. "Proximal Policy Optimization Algorithms".

    John Schulman et al. arXiv 2017. [Paper]

  12. "A General Theoretical Paradigm to Understand Learning from Human Preferences".

    Mohammad Gheshlaghi Azar et al. AISTATS 2024. [Paper]

  13. "Preference Ranking Optimization for Human Alignment".

    Feifan Song et al. AAAI 2024. [Paper] [GitHub]

  14. "RRHF: Rank Responses to Align Language Models with Human Feedback without tears".

    Zheng Yuan et al. NeurIPS 2023. [Paper] [GitHub]

  15. "KTO: Model Alignment as Prospect Theoretic Optimization".

    Kawin Ethayarajh et al. ICML 2024. [Paper]

  16. "SLiC-HF: Sequence Likelihood Calibration with Human Feedback".

    Yao Zhao et al. arXiv 2023. [Paper]

  17. "β-DPO: Direct Preference Optimization with Dynamic β".

    Junkang Wu et al. NeurIPS 2024. [Paper] [GitHub]

  18. "SimPO: Simple Preference Optimization with a Reference-Free Reward".

    Yu Meng et al. NeurIPS 2024. [Paper] [GitHub]

  19. "Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint".

    Wei Xiong et al. ICML 2024. [Paper]
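
For orientation, the DPO entry above (Rafailov et al.) and its β-DPO and SimPO follow-ups all build on the same objective. A minimal PyTorch sketch of that loss, assuming the response log-probabilities (summed over tokens) have already been computed under the policy and under a frozen reference model:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss (Rafailov et al., 2023) over a batch of preference pairs.

    Each argument is a (batch,) tensor of response log-probabilities,
    summed over tokens, under the trainable policy or the frozen
    reference model.
    """
    # Implicit reward of a response: beta * log(pi(y|x) / pi_ref(y|x)).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin between chosen and rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```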

Direct Multilingual Tuning

Multilingual Tuning Data Collection
  1. "Crosslingual Generalization through Multitask Finetuning".

    Niklas Muennighoff et al. ACL 2023. [Paper] [Github]

  2. "Bactrian-X: A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation".

    Haonan Li et al. arXiv 2023. [Paper] [Github]

  3. "Phoenix: Democratizing ChatGPT across Languages".

    Zhihong Chen et al. arXiv 2023. [Paper] [Github]

  4. "PolyLM: An Open Source Polyglot Large Language Model".

    Xiangpeng Wei et al. arXiv 2023. [Paper] [Github]

  5. "SeaLLMs -- Large Language Models for Southeast Asia".

    Xuan-Phi Nguyen et al. ACL 2024 DEMO TRACK. [Paper] [Github]

  6. "Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model".

    Ahmet Üstün et al. arXiv 2024. [Paper] [Huggingface]

  7. "Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback".

    Viet Dac Lai et al. EMNLP 2023 DEMO TRACK. [Paper] [Github]

Cross-Lingual Transfer Elicitation
  1. "Multilingual Instruction Tuning With Just a Pinch of Multilinguality".

    Uri Shaham et al. ACL 2024 Findings. [Paper]

  2. "Zero-shot cross-lingual transfer in instruction tuning of large language models".

    Nadezhda Chirkova et al. INLG 2024. [Paper]

  3. "Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca".

    Pinzhen Chen et al. EACL 2024 Findings. [Paper] [Github]

  4. "Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand for Multilingual Instructions?".

    Alexander Arno Weber et al. EMNLP 2024. [Paper] [Github]

  5. "Turning English-centric LLMs Into Polyglots: How Much Multilinguality Is Needed?".

    Tannon Kew et al. EMNLP 2024 Findings. [Paper] [Github]

  6. "Lucky 52: How Many Languages Are Needed to Instruction Fine-Tune Large Language Models?".

    Shaoxiong Ji et al. arXiv 2024. [Paper] [Huggingface]

  7. "Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment".

    Zhaofeng Wu et al. EMNLP 2024. [Paper]

  8. "The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts".

    Lingfeng Shen et al. ACL Findings 2024. [Paper] [Github]

Multilingual Tuning Augmented by Cross-Lingual Alignment

Translation-Assisted Tuning
  1. "Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models by Translation-following demonstrations".

    Leonardo Ranaldi et al. ACL Findings 2024. [Paper] [Github]

  2. "Extrapolating Large Language Models to Non-English by Aligning Languages".

    Wenhao Zhu et al. arXiv 2023. [Paper]

  3. "The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights".

    Wenhao Zhu et al. arXiv 2024. [Paper]

  4. "Question Translation Training for Better Multilingual Reasoning".

    Wenhao Zhu et al. ACL 2024 Findings. [Paper] [Github]

  5. "BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models".

    Shaolei Zhang et al. arXiv 2023. [Paper] [Github]

  6. "InstructAlign: High-and-Low Resource Language Alignment via Continual Crosslingual Instruction Tuning".

    Samuel Cahyawijaya et al. SEALP 2023. [Paper]

Cross-Lingual Tuning
  1. "xCoT: Cross-lingual Instruction Tuning for Cross-lingual Chain-of-Thought Reasoning".

    Linzheng Chai et al. arXiv 2024. [Paper]

  2. "PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning".

    Zhihan Zhang et al. ACL 2024. [Paper] [Github]

  3. "TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes".

    Bibek Upadhayay et al. arXiv 2023. [Paper] [Github]

  4. "MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization".

    Shuaijie She et al. ACL 2024. [Paper] [Github]

Enhancement of Specific Multilingual Abilities

Adaptation to New Languages
  1. "Efficient and Effective Text Encoding for Chinese LLaMA and Alpaca".

    Yiming Cui et al. arXiv 2023. [Paper] [Github]

  2. "Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models".

    Seungduk Kim et al. arXiv 2024. [Paper]

  3. "Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities".

    Kazuki Fujii et al. COLM 2024. [Paper]

  4. "MaLA-500: Massive Language Adaptation of Large Language Models".

    Peiqin Lin et al. arXiv 2024. [Paper]

  5. "SeaLLMs -- Large Language Models for Southeast Asia".

    Xuan-Phi Nguyen et al. ACL 2024 DEMO TRACK. [Paper] [Github]

  6. "LangBridge: Multilingual Reasoning Without Multilingual Supervision".

    Dongkeun Yoon et al. ACL 2024. [Paper] [Github]

  7. "RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization".

    Jaavid Aktar Husain et al. ACL 2024. [Paper] [Github]

  8. "BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting".

    Zheng Xin Yong et al. ACL 2023. [Paper] [Github]

  9. "LLaMA Beyond English: An Empirical Study on Language Capability Transfer".

    Jun Zhao et al. arXiv 2024. [Paper]

  10. "Rethinking LLM language adaptation: A case study on chinese mixtral".

    Yiming Cui et al. arXiv 2024. [Paper] [Github]
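
A recipe shared by several entries above (e.g., the Chinese LLaMA and vocabulary-expansion papers) is to add target-language tokens to the tokenizer and grow the embedding matrix before continued pretraining. A minimal sketch with Hugging Face transformers; the checkpoint name and the added tokens are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder base checkpoint; substitute the model being adapted.
base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical target-language subwords, e.g. trained with a separate
# SentencePiece model on a target-language corpus and then merged.
new_tokens = ["▁你好", "▁世界"]
num_added = tokenizer.add_tokens(new_tokens)

# Resize input/output embeddings so the new ids get (randomly
# initialized) rows; continued pretraining then learns them.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```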

Machine Translation
  1. "Towards Robust In-Context Learning for Machine Translation with Large Language Models".

    Shaolin Zhu et al. LREC 2024. [Paper]

  2. "LANDeRMT: Dectecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation".

    Shaolin Zhu et al. ACL 2024. [Paper]

  3. "FEDS-ICL: Enhancing translation ability and efficiency of large language model by optimizing demonstration selection".

    Shaolin Zhu et al. Information Processing & Management 2024. [Paper]

  4. "Efficiently Exploring Large Language Models for Document-Level Machine Translation with In-context Learning".

    Menglong Cui et al. ACL 2024 Findings. [Paper]

  5. "DUAL-REFLECT: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms".

    Andong Chen et al. ACL 2024. [Paper] [Github]

  6. "Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model".

    Hongbin Zhang et al. ACL 2024 Findings. [Paper] [Github]

  7. "BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages".

    Wen Yang et al. arXiv 2023. [Paper] [Github]

  8. "A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models".

    Haoran Xu et al. ICLR 2024. [Paper] [Github]

  9. "Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions".

    Jiahuan Li et al. TACL 2024. [Paper]

  10. "Fine-Tuning Large Language Models to Translate: Will a Touch of Noisy Data in Misaligned Languages Suffice?".

    Dawei Zhu et al. EMNLP 2024. [Paper] [Github]

  11. "Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation".

    Haoran Xu et al. ICML 2024. [Paper] [Github]

  12. "Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution".

    Nuo Xu et al. arXiv 2024. [Paper]

  13. "Word Alignment as Preference for Machine Translation".

    Qiyu Wu et al. EMNLP 2024. [Paper]

  14. "Teaching Large Language Models to Translate with Comparison".

    Jiali Zeng et al. AAAI 2024. [Paper] [Github]

  15. "Improving Translation Faithfulness of Large Language Models via Augmenting Instructions".

    Yijie Chen et al. arXiv 2023. [Paper] [Github]

  16. "Towards Boosting Many-to-Many Multilingual Machine Translation with Large Language Models".

    Pengzhi Gao et al. arXiv 2024. [Paper] [Github]

  17. "Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages".

    Zhuoyuan Mao et al. LoResMT 2024. [Paper]

  18. "Relay Decoding: Concatenating Large Language Models for Machine Translation".

    Chengpeng Fu et al. arXiv 2024. [Paper]

  19. "m3P: Towards Multimodal Multilingual Translation with Multimodal Prompt".

    Jian Yang et al. LREC 2024. [Paper] [Huggingface]
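
Several of the in-context-learning papers above study how demonstration selection shapes few-shot translation prompts. A generic prompt builder for illustration; the template is our assumption, not any paper's exact format:

```python
def build_mt_prompt(demos, src_sentence, src_lang="German", tgt_lang="English"):
    """Assemble a few-shot translation prompt from (source, target) pairs."""
    blocks = [f"{src_lang}: {src}\n{tgt_lang}: {tgt}" for src, tgt in demos]
    # The final block leaves the target side empty for the model to fill.
    blocks.append(f"{src_lang}: {src_sentence}\n{tgt_lang}:")
    return "\n\n".join(blocks)

demos = [
    ("Guten Morgen.", "Good morning."),
    ("Wie geht es dir?", "How are you?"),
]
print(build_mt_prompt(demos, "Das Wetter ist heute schön."))
```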

Cultural Adaptation
  1. "CultureLLM: Incorporating Cultural Differences into Large Language Models".

    Cheng Li et al. NeurIPS 2024. [Paper] [Github]

  2. "CulturePark: Boosting Cross-cultural Understanding in Large Language Models".

    Cheng Li et al. NeurIPS 2024. [Paper] [Github]

  3. "Self-Pluralising Culture Alignment for Large Language Models".

    Shaoyang Xu et al. arXiv 2024. [Paper] [Github]

  4. "Global Gallery: The Fine Art of Painting Culture Portraits through Multilingual Instruction Tuning".

    Anjishnu Mukherjee et al. NAACL 2024. [Paper] [Github]

  5. "The Echoes of Multilinguality: Tracing Cultural Value Shifts during LM Fine-tuning".

    Rochelle Choenni et al. ACL 2024. [Paper]

  6. "CRAFT: Extracting and Tuning Cultural Instructions from the Wild".

    Bin Wang et al. ACL 2024 Workshop - C3NLP. [Paper] [Github]

Multilingual Evaluation

Tokenizer Evaluation

  1. "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models".

    Phillip Rust and Jonas Pfeiffer et al. ACL-IJCNLP 2021. [Paper] [GitHub]

  2. "ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models".

    Linting Xue, Aditya Barua, Noah Constant, and Rami Al-Rfou et al. TACL 2022. [Paper] [GitHub]

  3. "Language Model Tokenizers Introduce Unfairness Between Languages".

    Aleksandar Petrov et al. NeurIPS 2023. [Paper] [GitHub]

  4. "Tokenizer Choice For LLM Training: Negligible or Crucial?".

    Mehdi Ali, Michael Fromm, and Klaudia Thellmann et al. NAACL (Findings) 2024. [Paper]
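
A metric running through this line of work (Rust and Pfeiffer et al. measure it directly; Petrov et al. study related parity measures) is tokenizer fertility: the average number of subwords per whitespace word, which is typically much higher for under-represented languages. A minimal sketch; the checkpoint and example sentences are arbitrary:

```python
from transformers import AutoTokenizer

def fertility(tokenizer, text):
    """Average number of subword tokens per whitespace-delimited word."""
    words = text.split()
    n_subwords = sum(len(tokenizer.tokenize(w)) for w in words)
    return n_subwords / len(words)

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
en = "The quick brown fox jumps over the lazy dog"
fi = "Nopea ruskea kettu hyppää laiskan koiran yli"
print(f"en fertility: {fertility(tok, en):.2f}")
print(f"fi fertility: {fertility(tok, fi):.2f}")
```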

Multilingual Evaluation Benchmarks and Datasets

Multilingual Holistic Evaluation
  1. "MEGA: Multilingual Evaluation of Generative AI".

    Kabir Ahuja et al. EMNLP 2023. [Paper] [GitHub]

  2. "MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks".

    Sanchit Ahuja et al. arXiv 2024. [Paper]

  3. "ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models".

    Viet Dac Lai, Nghia Trung Ngo, and Amir Pouran Ben Veyseh et al. EMNLP (Findings) 2023. [Paper]

Multilingual Task-Specific Evaluation
Translation Evaluation
  1. "Investigating the Translation Performance of a Large Multilingual Language Model: the Case of BLOOM".

    Rachel Bawden et al. EAMT 2023. [Paper] [GitHub]

  2. "Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis".

    Wenhao Zhu et al. NAACL (Findings) 2024. [Paper] [GitHub]

Question Answering Evaluation
  1. "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models".

    Wenxuan Zhang et al. NeurIPS 2023. [Paper] [GitHub]

  2. "Evaluating the Elementary Multilingual Capabilities of Large Language Models with MULTIQ".

    Carolin Holtermann and Paul Röttger et al. ACL (Findings) 2024. [Paper] [GitHub]

Summarization Evaluation
  1. "SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation".

    Elizabeth Clark et al. EMNLP 2023. [Paper] [GitHub]

Dialogue Evaluation
  1. "xDial-Eval: A Multilingual Open-Domain Dialogue Evaluation Benchmark".

    Chen Zhang et al. EMNLP (Findings) 2023. [Paper] [GitHub]

  2. "MEEP: Is this Engaging? Prompting Large Language Models for Dialogue Evaluation in Multilingual Settings".

    Amila Ferron et al. EMNLP (Findings) 2023. [Paper] [GitHub]

Multilingual Alignment Evaluation
Multilingual Ethics Evaluation
  1. "Ethical Reasoning and Moral Value Alignment of LLMs Depend on the Language we Prompt them in".

    Utkarsh Agarwal, Kumar Tanmay, and Aditi Khandelwal et al. LREC-COLING 2024. [Paper]

Multilingual Toxicity Evaluation
  1. "RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios?".

    Adrian de Wynter et al. arXiv 2024. [Paper] [GitHub]

  2. "PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models".

    Devansh Jain and Priyanshu Kumar et al. COLM 2024. [Paper] [GitHub]

Multilingual Bias Evaluation
  1. "On Evaluating and Mitigating Gender Biases in Multilingual Settings".

    Aniket Vashishtha and Kabir Ahuja et al. ACL (Findings) 2023. [Paper] [GitHub]

Multilingual Safety Evaluation
Multilingual Safety Benchmarks
  1. "All Languages Matter: On the Multilingual Safety of LLMs".

    Wenxuan Wang et al. ACL (Findings) 2024. [Paper] [GitHub]

Multilingual Jailbreaking/Red-Teaming
  1. "Low-Resource Languages Jailbreak GPT-4".

    Zheng-Xin Yong et al. NeurIPS (Workshop) 2023. [Paper]

  2. "Multilingual Jailbreak Challenges in Large Language Models".

    Yue Deng et al. ICLR 2024. [Paper] [GitHub]

  3. "A Cross-Language Investigation into Jailbreak Attacks in Large Language Models".

    Jie Li et al. arXiv 2024. [Paper]

Multilingualism Evaluation

  1. "How Vocabulary Sharing Facilitates Multilingualism in LLaMA?".

    Fei Yuan et al. ACL (Findings) 2024. [Paper] [GitHub]

MLLMs as Multilingual Evaluators

  1. "Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?".

    Rishav Hada et al. EACL (Findings) 2024. [Paper] [GitHub]

  2. "METAL: Towards Multilingual Meta-Evaluation".

    Rishav Hada and Varun Gumma et al. NAACL (Findings) 2024. [Paper] [GitHub]

Interpretability

Interpretability of Multilingual Capabilities

Model-Wide Interpretation
  1. "How do Large Language Models Handle Multilingualism?".

    Zhao Y, Zhang W, Chen G, et al. arXiv 2024. [Paper]

  2. "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".

    Wendler C, Veselovsky V, Monea G, et al. ACL 2024. [Paper]

  3. "Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models".

    Blevins T, Gonen H, Zettlemoyer L. EMNLP 2022. [Paper]

Component-Based Interpretation
  1. "Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks".

    Bhattacharya S, Bojar O. Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP 2023. [Paper]

Neuron-Level Interpretation
  1. "Unveiling Linguistic Regions in Large Language Models".

    Zhang Z, Zhao J, Zhang Q, et al. ACL 2024. [Paper]

  2. "Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models".

    Tang T, Luo W, Huang H, et al. ACL 2024. [Paper]

  3. "Unraveling Babel: Exploring Multilingual Activation Patterns of LLMs and Their Applications".

    Liu W, Xu Y, Xu H, et al. EMNLP 2024. [Paper]

  4. "On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons".

    Kojima T, Okimura I, Iwasawa Y, et al. NAACL 2024. [Paper]
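
The neuron-level papers above typically score each neuron by how concentrated its activations are on a single language. An illustrative, entropy-based sketch in the spirit of Tang et al.; the array layout and the cutoff are our assumptions, not their exact procedure:

```python
import numpy as np

def language_specific_neurons(act_prob, keep_fraction=0.05):
    """Flag neurons whose activation mass concentrates on one language.

    act_prob: (n_neurons, n_languages) array; entry (i, l) is the
    empirical probability that neuron i fires (activation > 0) on
    language-l text. A low-entropy row suggests a language-specific
    neuron. This only loosely mirrors the selection in Tang et al.
    """
    p = act_prob / act_prob.sum(axis=1, keepdims=True)  # normalize rows
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)      # low = specific
    cutoff = np.quantile(entropy, keep_fraction)        # keep lowest 5%
    specific = np.where(entropy <= cutoff)[0]
    dominant_lang = p[specific].argmax(axis=1)          # which language
    return specific, dominant_lang
```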

Representation-Driven Interpretation
  1. "The Geometry of Multilingual Language Model Representations".

    Chang T, Tu Z, Bergen B. EMNLP 2022. [Paper]

  2. "Language-agnostic Representation from Multilingual Sentence Encoders for Cross-lingual Similarity Estimation".

    Tiyajamorn N, Kajiwara T, Arase Y, et al. EMNLP 2021. [Paper]

  3. "An Isotropy Analysis in the Multilingual BERT Embedding Space".

    Rajaee S, Pilehvar M T. ACL 2022. [Paper]

  4. "Discovering Low-rank Subspaces for Language-agnostic Multilingual Representations".

    Xie Z, Zhao H, Yu T, et al. EMNLP 2022. [Paper]

  5. "Emerging Cross-lingual Structure in Pretrained Language Models".

    Conneau A, Wu S, Li H, et al. ACL 2020. [Paper]

  6. "Probing LLMs for Joint Encoding of Linguistic Categories".

    Starace G, Papakostas K, Choenni R, et al. EMNLP 2023. [Paper]

  7. "Morph Call: Probing Morphosyntactic Content of Multilingual Transformers".

    Mikhailov V, Serikov O, Artemova E. Proceedings of the Third Workshop on Computational Typology and Multilingual NLP 2021. [Paper]

  8. "Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models".

    Stanczak K, Ponti E, Hennigen L T, et al. NAACL 2022. [Paper]

  9. "Probing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders".

    Vulić I, Glavaš G, Liu F, et al. EACL 2023. [Paper]

  10. "The Emergence of Semantic Units in Massively Multilingual Models".

    de Varda A G, Marelli M. LREC-COLING 2024. [Paper]

  11. "X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models".

    Jiang Z, Anastasopoulos A, Araki J, et al. EMNLP 2020. [Paper]

  12. "Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models".

    Kassner N, Dufter P, Schütze H. EACL 2021. [Paper]

  13. "Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models".

    Qi J, Fernández R, Bisazza A. EMNLP 2023. [Paper]

  14. "Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models?".

    Xu S, Li J, Xiong D. EMNLP 2023. [Paper]

Interpretability of Cross-lingual Transfer

  1. "Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization"

    Xu N, Zhang Q, Ye J, et al. EMNLP 2023. [Paper]

  2. "When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer".

    Deshpande A, Talukdar P, Narasimhan K. NAACL 2022. [Paper]

  3. "Emerging Cross-lingual Structure in Pretrained Language Models".

    Conneau A, Wu S, Li H, et al. ACL 2020. [Paper]

  4. "Cross-Lingual Ability of Multilingual BERT: An Empirical Study".

    Karthikeyan K, Wang Z, Mayhew S, et al. ICLR 2020. [Paper]

  5. "Unveiling Linguistic Regions in Large Language Models".

    Zhang Z, Zhao J, Zhang Q, et al. ACL 2024. [Paper]

  6. "Unraveling Babel: Exploring Multilingual Activation Patterns of LLMs and Their Applications".

    Liu W, Xu Y, Xu H, et al. EMNLP 2024. [Paper]

  7. "The Geometry of Multilingual Language Model Representations".

    Chang T, Tu Z, Bergen B. EMNLP 2022. [Paper]

Interpretability of Linguistic Bias

  1. "How do Large Language Models Handle Multilingualism?".

    Zhao Y, Zhang W, Chen G, et al. arXiv 2024. [Paper]

  2. "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".

    Wendler C, Veselovsky V, Monea G, et al. ACL 2024. [Paper]

Applications of MLLMs

MLLMs for Biology and Medicine

  1. "Biobert: a pre-trained biomedical language representation model for biomedical text mining."

    Jinhyuk Lee et al. Bioinform 2020. [paper] [github]

  2. "DNABERT: pre-trained bidirectional encoder representations from transformers model for dna-language in genome."

    Yanrong Ji et al. Bioinform 2021. [paper] [github]

  3. "DNABERT-2: efficient foundation model and benchmark for multi-species genome."

    Zhihan Zhou et al. arXiv 2023. [paper] [github]

  4. "MING-MOE: enhancing medical multi-task learning in large language models with sparse mixture of low-rank adapter experts."

    Yusheng Liao et al. arXiv 2024. [paper] [github]

  5. "Doctorglm: Fine-tuning your chinese doctor is not a herculean task."

    Honglin Xiong et al. arXiv 2023. [paper] [github]

  6. "Huatuogpt, towards taming language model to be a doctor."

    Hongbo Zhang et al. EMNLP 2023 [paper] [github]

  7. "Medgpt: Medical concept prediction from clinical narratives."

    Zeljko Kraljevic et al. arXiv 2021 [paper]

  8. "Clinicalgpt: Large language models finetuned with diverse medical data and comprehensive evaluation."

    Guangyu Wang et al. arXiv 2023 [paper]

  9. "Ivygpt: Interactive chinese pathway language model in medical domain."

    Rongsheng Wang et al. CICAI 2023 [paper] [github]

  10. "Bianque: Balancing the questioning and suggestion ability of health llms with multi-turn health conversations polished by chatgpt."

    Yirong Chen et al. arXiv 2023 [paper] [github]

  11. "Soulchat: Improving llms’ empathy, listening, and comfort abilities through fine-tuning with multi-turn empathy conversations."

    Yirong Chen et al. EMNLP 2023 [paper] [github]

  12. "Towards expert-level medical question answering with large language models."

    Karan Singhal et al. arXiv 2023 [paper] [github]

  13. "Chatdoctor: A medical chat model fine-tuned on llama model using medical domain knowledge."

    Yunxiang Li et al. arXiv 2023 [paper] [[github](Chatdoctor: A medical chat model fine-tuned on llama model using medical domain knowledge)]

MLLMs for Computer Science

  1. "Codebert: A pre-trained model for programming and natural languages."

    Zhangyin Feng et al. EMNLP 2020 [paper] [github]

  2. "Learning and evaluating contextual embedding of source code."

    Aditya Kanade et al. ICML 2020 [paper] [github]

  3. "Unified pre-training for program understanding and generation."

    Wasi Uddin Ahmad et al. NAACL-HLT 2021 [paper] [github]

  4. "Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation."

    Yue Wang et al. EMNLP 2021 [paper] [github]

  5. "Codet5+: Open code large language models for code understanding and generation."

    Yue Wang et al. EMNLP 2023 [paper] [github]

  6. "Competition-level code generation with alphacode"

    Yujia Li et al. arXiv 2022 [paper] [github]

  7. "Evaluating large language models trained on code."

    Mark Chen et al. arXiv 2021 [paper] [github]

  8. "A systematic evaluation of large language models of code."

    Frank F. Xu et al. MAPS 2022 [paper] [github]

  9. "Codegen: An open large language model for code with multi-turn program synthesis."

    Erik Nijkamp et al. ICLR 2023 [paper] [github]

  10. "A generative model for code infilling and synthesis."

    Daniel Fried et al. ICLR 2023 [paper] [github]

  11. "Code llama: Open foundation models for code."

    Baptiste Rozière et al. arXiv 2023 [paper] [github]

  12. "Starcoder: may the source be with you!"

    Raymond Li et al. arXiv 2023 [paper] [github]

  13. "CodeGeeX: A pretrained model for code generation with multilingual benchmarking on humaneval-x."

    Qinkai Zheng et al. KDD 2023 [paper] [github]

  14. "Codeshell technical report."

    Rui Xie et al. arXiv 2024 [paper] [github]

  15. "CodeGemma: Open Code Models Based on Gemma"

    CodeGemma Team et al. arXiv 2024 [paper] [github]

  16. "Qwen2.5-Coder Technical Report."

    Binyuan Hui et al. arXiv 2024 [paper] [github]

MLLMs for Mathematics

  1. "Tree-based representation and generation of natural and mathematical language."

    Alexander Scarlatos et al. ACL 2023 [paper] [github]

  2. "Chatglm-math: Improving math problem-solving in large language models with a self-critique pipeline."

    Yifan Xu et al. arXiv 2024 [paper] [github]

  3. "Deepseek-math: Pushing the limits of mathematical reasoning in open language models."

    Zhihong Shao et al. arXiv 2024 [paper] [github]

  4. "Metamath: Bootstrap your own mathematical questions for large language models."

    Longhui Yu et al. arXiv 2024 [paper] [github]

  5. "Mammoth: Building math generalist models through hybrid instruction tuning."

    Xiang Yue et al. arXiv 2023 [paper] [github]

  6. "Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct."

    Haipeng Luo et al. arXiv 2023 [paper] [github]

  7. "Generative ai for math: Abel."

    Ethan Chern et al. GitHub 2023 [github]

  8. "Orca-math: Unlocking the potential of slms in grade school math."

    Arindam Mitra et al. arXiv 2024 [paper] [github]

MLLMs for Law

  1. "LEGAL-BERT: the muppets straight out of law school."

    Ilias Chalkidis et al. arXiv 2020 [paper] [github]

  2. "Lawformer: A pre-trained language model for chinese legal long documents."

    Chaojun Xiao et al. AI Open 2021 [paper] [github]

  3. "A brief report on lawgpt 1.0: A virtual legal assistant based on GPT-3."

    Ha-Thanh Nguyen arXiv 2023 [paper]

  4. "Disc-lawllm: Fine-tuning large language models for intelligent legal services."

    Shengbin Yue et al. arXiv 2023 [paper] [github]

  5. "Chatlaw: Open-source legal large language model with integrated external knowledge bases."

    Jiaxi Cui et al. arXiv 2023 [paper] [github]

  6. "SAILER: structure-aware pre-trained language model for legal case retrieval."

    Haitao Li et al. SIGIR 2023 [paper] [github]

  7. "Lawyer llama technical report."

    Quzhe Huang et al. arXiv 2023 [paper] [github]

  8. "Legal-relectra: Mixeddomain language modeling for long-range legal text comprehension."

    Wenyue Hua et al. arXiv 2022 [paper]
