
# Model

This is an mMiniLM-L12xH384 XLM-R model, proposed in [MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers](https://arxiv.org/abs/2012.15828), that we fine-tuned on the direct assessment (DA) annotations collected at the Workshop on Statistical Machine Translation (WMT) from 2015 to 2020.

This model is much more lightweight than the original XLM-RoBERTa base and large models.
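
The snippet below is a minimal sketch of loading the encoder with the Hugging Face `transformers` Auto classes. The model id `Unbabel/xlm-roberta-comet-small` is taken from this card; whether the checkpoint loads directly this way is an assumption, since in practice this encoder is typically consumed inside the COMET framework for MT evaluation rather than used standalone.

```python
# Minimal sketch, assuming the checkpoint exposes standard
# transformers-compatible XLM-R weights. In practice this encoder is
# usually plugged into the COMET framework rather than used on its own.
from transformers import AutoTokenizer, AutoModel

model_id = "Unbabel/xlm-roberta-comet-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a translation hypothesis; the encoder returns contextual
# embeddings with hidden size 384 (the H384 in mMiniLM-L12xH384).
inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, seq_len, 384])
```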

