---
license: apache-2.0
language:
- it
---
**Model:** DistilBERT
**Lang:** IT
## Model description
This is a DistilBERT [1] model for the Italian language, obtained by using the multilingual DistilBERT (distilbert-base-multilingual-cased) as a starting point and focusing it on Italian by modifying the embedding layer (as in [2], computing document-level token frequencies over the Wikipedia dataset).

The resulting model has 67M parameters, a vocabulary of 30,785 tokens, and a size of ~270 MB.
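The vocabulary-reduction approach of [2] can be sketched as follows: rank the tokens of the multilingual tokenizer by the number of corpus documents they occur in, keep the top ones, and copy only the corresponding rows of the original embedding matrix. The sketch below is a minimal, hypothetical illustration (the function names and details are not the actual conversion script):

```python
from collections import Counter

import torch

def select_token_ids(tokenizer, documents, vocab_size):
    """Rank token ids by document-level frequency and keep the top `vocab_size`."""
    doc_freq = Counter()
    for doc in documents:
        # count each token at most once per document
        doc_freq.update(set(tokenizer(doc, add_special_tokens=False)["input_ids"]))
    return sorted(tok_id for tok_id, _ in doc_freq.most_common(vocab_size))

def trim_input_embeddings(model, kept_ids):
    """Replace the embedding matrix with only the rows for the kept token ids."""
    old_weights = model.get_input_embeddings().weight.data        # shape (V_old, H)
    new_embeddings = torch.nn.Embedding(len(kept_ids), old_weights.size(1))
    new_embeddings.weight.data = old_weights[kept_ids]            # copy matching rows
    model.set_input_embeddings(new_embeddings)
    return model
```

A full conversion would also have to preserve the special tokens and rebuild the tokenizer so that its token ids match the trimmed embedding matrix.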
## Quick usage
```python
from transformers import DistilBertTokenizerFast, DistilBertModel

# load the tokenizer and the model from the Hugging Face Hub
tokenizer = DistilBertTokenizerFast.from_pretrained("osiria/distilbert-base-italian-cased")
model = DistilBertModel.from_pretrained("osiria/distilbert-base-italian-cased")
```
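Once loaded, the model can be used as a feature extractor. A minimal example (the input sentence is an arbitrary illustration):

```python
import torch

# encode an Italian sentence and extract its contextual embeddings
inputs = tokenizer("Buongiorno, come stai?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch size, sequence length, 768)
```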
## References
[1] Sanh et al., 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. https://arxiv.org/abs/1910.01108

[2] Abdaoui et al., 2020. Load What You Need: Smaller Versions of Multilingual BERT. https://arxiv.org/abs/2010.05609
## License

The model is released under the Apache-2.0 license.