Muhammad Farid Adilazuarda, Samuel Cahyawijaya, Alham Fikri Aji, Genta Indra Winata, Ayu Purwarianti
MBZUAI, HKUST, Bloomberg, Institut Teknologi Bandung
Pretrained language models (PLMs) have become remarkably adept at task and language generalization. Nonetheless, they often fail when faced with unseen languages. In this work, we present LinguAlchemy, a regularization method that incorporates linguistic information covering typological, geographical, and phylogenetic features to align PLM representations with the corresponding linguistic information for each language. LinguAlchemy significantly improves the performance of mBERT and XLM-R on low-resource languages across multiple downstream tasks, such as intent classification, news classification, and semantic relatedness, compared to fully fine-tuned models, and displays a high degree of unseen-language generalization. We further introduce AlchemyScale and AlchemyTune, extensions of LinguAlchemy that adjust the linguistic regularization weights automatically, alleviating the need for hyperparameter search.
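
The core idea can be sketched as follows: alongside the usual task loss, an auxiliary loss aligns the model's pooled representation (projected through a learned linear layer) with a fixed linguistic feature vector for the input's language, e.g., a URIEL vector obtained via lang2vec. The sketch below is a minimal illustration, not this repository's actual API; the class name, `ling_proj`, `lambda_ling`, and the fixed-weight loss combination are assumptions (AlchemyScale and AlchemyTune set this weight automatically rather than using a fixed hyperparameter).

```python
# Illustrative sketch of linguistic regularization (not the repo's API).
import torch
import torch.nn as nn
from transformers import AutoModel

class LinguisticallyRegularizedClassifier(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_labels=10, ling_dim=299, lambda_ling=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)
        # Projects the pooled representation into the linguistic-vector space;
        # ling_dim is the dimensionality of the chosen lang2vec feature set.
        self.ling_proj = nn.Linear(hidden, ling_dim)
        self.lambda_ling = lambda_ling  # fixed weight here; learned in AlchemyTune

    def forward(self, input_ids, attention_mask, labels=None, ling_vec=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # [CLS] representation
        logits = self.classifier(pooled)
        loss = None
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
            if ling_vec is not None:
                # Alignment term: pull the projected representation toward the
                # language's typological/geographical feature vector.
                ling_loss = nn.functional.mse_loss(self.ling_proj(pooled), ling_vec)
                loss = loss + self.lambda_ling * ling_loss
        return logits, loss
```

Note that the linguistic vectors are precomputed constants per language and only enter the training loss; at inference time the model takes text alone, so evaluating on an unseen language requires no additional inputs.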
If you find this paper or repository helpful for your research, please cite it as follows:
@misc{adilazuarda2024lingualchemy,
      title={LinguAlchemy: Fusing Typological and Geographical Elements for Unseen Language Generalization},
      author={Muhammad Farid Adilazuarda and Samuel Cahyawijaya and Alham Fikri Aji and Genta Indra Winata and Ayu Purwarianti},
      year={2024},
      eprint={2401.06034},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}