---
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
license: cc-by-nc-4.0
tags:
- mlx
metrics:
- comet
pipeline_tag: translation
---

# mlx-community/TowerInstruct-v0.1-bfloat16-mlx

This model was converted to MLX format from [`Unbabel/TowerInstruct-v0.1`](https://huggingface.co/Unbabel/TowerInstruct-v0.1).
Refer to the [original model card](https://huggingface.co/Unbabel/TowerInstruct-v0.1) for more details on the model.

## Intended uses & limitations (from the [original model card](https://huggingface.co/Unbabel/TowerInstruct-v0.1))

The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:

- Translation (sentence- and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions

You can find the dataset and all of its data sources on the [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) page.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/TowerInstruct-v0.1-bfloat16-mlx")

prompt = "Translate the following text from Portuguese into French.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nFrench:"

response = generate(model, tokenizer, prompt=prompt, verbose=True)
# Un groupe d'investigateurs a lancé un nouveau modèle pour les tâches liées à la traduction.
```
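
The upstream model was instruction-tuned with a chat-style prompt template, so wrapping the translation request through the tokenizer's chat template can help the model follow the instruction. The snippet below is a minimal sketch, assuming the converted tokenizer ships a chat template and exposes `apply_chat_template` (as the `mlx_lm` tokenizer wrapper typically does); it is not part of the upstream card.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/TowerInstruct-v0.1-bfloat16-mlx")

# Express the same translation request as a single user turn.
messages = [
    {
        "role": "user",
        "content": (
            "Translate the following text from Portuguese into French.\n"
            "Portuguese: Um grupo de investigadores lançou um novo modelo "
            "para tarefas relacionadas com tradução.\n"
            "French:"
        ),
    }
]

# Render the chat template into a plain prompt string (assumes the
# tokenizer carries a chat template from the original model).
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

If the converted tokenizer does not include a chat template, fall back to the plain prompt shown above or format the prompt as described in the original model card.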