---
license: cc-by-nc-sa-4.0
datasets:
  - wi_locness
  - matejklemen/falko_merlin
  - paws
  - paws-x
  - asset
language:
  - en
  - de
  - es
  - ar
  - ja
  - ko
  - zh
metrics:
  - bleu
  - rouge
  - sari
  - accuracy
library_name: transformers
---

# Model Card for mEdIT-xxl

This model was obtained by fine-tuning the MBZUAI/bactrian-x-llama-13b-lora model on the mEdIT dataset.

Paper: mEdIT: Multilingual Text Editing via Instruction Tuning

Authors: Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar

## Model Details

### Model Description

- **Language(s) (NLP):** Arabic, Chinese, English, German, Japanese, Korean, Spanish
- **Finetuned from model:** MBZUAI/bactrian-x-llama-13b-lora

### Model Sources

## How to use

We release the best-performing models presented in our paper. A basic loading and generation example is sketched below.
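
The following is a minimal usage sketch with the `transformers` library. The Hub ID `grammarly/medit-xxl` and the instruction-style prompt template are assumptions based on this model card and the mEdIT paper; adjust them to the actual repository and prompt format if they differ.

```python
# Minimal sketch: load the model and run an edit instruction.
# Assumptions: Hub ID "grammarly/medit-xxl" and an instruction/input/response
# prompt template; check the repository for the exact format.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "grammarly/medit-xxl"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example grammatical error correction prompt (template is an assumption).
prompt = (
    "### Instruction:\nFix grammatical errors in this text\n\n"
    "### Input:\nShe go to school every day.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens (the edited text).
generated = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```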