MaLA-500: Massive Language Adaptation of Large Language Models
MaLA-500 is a large language model designed to cover 534 languages. It builds on LLaMA 2 7B, combining continued pretraining with vocabulary extension (expanding the vocabulary to 260,164 tokens) and LoRA low-rank adaptation.
The vocabulary extension and LoRA modules add 2.1B trainable parameters, bringing the total parameter count to 10.7B.
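Most of the added parameters come from the enlarged input and output embedding matrices. A rough back-of-envelope check, assuming LLaMA 2 7B's hidden size of 4096 and its original vocabulary of 32,000 tokens (assumptions not stated in this card), gives their share:

```python
# Back-of-envelope estimate of parameters added by vocabulary extension.
# Assumes LLaMA 2 7B's hidden size (4096) and original vocabulary (32,000);
# the remainder of the ~2.1B trainable parameters comes from the LoRA adapters.
hidden_size = 4096
old_vocab, new_vocab = 32_000, 260_164

# Input embedding matrix plus the (untied) output projection.
added = 2 * (new_vocab - old_vocab) * hidden_size
print(f"~{added / 1e9:.2f}B parameters added by embedding extension")
```

This accounts for roughly 1.87B of the 2.1B trainable parameters, with the LoRA adapters making up the rest.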
Please refer to our paper for more details.
Requirements:

```
transformers>=4.36.1
peft>=0.6.2
```
Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the LLaMA 2 7B base model and resize its embeddings to the
# extended MaLA-500 vocabulary (260,164 tokens).
base_model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-hf')
base_model.resize_token_embeddings(260164)

# Load the extended tokenizer and apply the LoRA adapter weights.
tokenizer = AutoTokenizer.from_pretrained('MaLA-LM/mala-500-10b')
model = PeftModel.from_pretrained(base_model, 'MaLA-LM/mala-500-10b')
```
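Once loaded, the model can be used for generation in the usual transformers way. A minimal sketch follows; the prompt and decoding settings are illustrative assumptions, not recommendations from the paper (running it requires access to the gated LLaMA 2 weights):

```python
# Minimal generation sketch; assumes `tokenizer` and `model` were loaded as above.
# The prompt text and decoding parameters are illustrative only.
inputs = tokenizer("Hyvää huomenta", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```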
Citation:

```
@misc{lin2024mala500,
  title={MaLA-500: Massive Language Adaptation of Large Language Models},
  author={Peiqin Lin and Shaoxiong Ji and Jörg Tiedemann and André F. T. Martins and Hinrich Schütze},
  year={2024},
  eprint={2401.13303},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```