About vocab extension

#2
by alielfilali01 - opened

Did you extend the tokenizer's vocabulary, or did you just use the original tokenizer? And about the training: was it full model training, or did you add a LoRA adapter on top and then merge it?
Your answer will be much appreciated, and thanks for your contribution to open science 🤗
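
For readers unfamiliar with the two options being asked about, here is a minimal sketch using the Hugging Face transformers and peft libraries. The checkpoint name, added tokens, and LoRA hyperparameters are illustrative assumptions only, not what Tower actually used (per the reply below, Tower was trained with full model training and the original tokenizer).

```python
# Sketch of the two alternatives asked about; names and values are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "some-org/llama-style-base-7b"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Option 1: extend the tokenizer's vocabulary and resize the embedding matrix.
new_tokens = ["<extra_token_1>", "<extra_token_2>"]  # hypothetical additions
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Option 2: train a LoRA adapter on top, then merge it into the base weights.
lora_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
peft_model = get_peft_model(model, lora_cfg)
# ... train peft_model ...
merged = peft_model.merge_and_unload()  # folds the adapter back into the base model
```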

Unbabel org

Hi, thank you for your interest in Tower.
We did full model training and used the original tokenizer.
We are going to publish a paper on Tower soon, so stay tuned!

jmprcp changed discussion status to closed
