Promptriever Collection
Promptable retrievers and their datasets from the paper "Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models."
This is a reproduced version of the RepLLaMA model. See this thread for details of the reproduction process and how it differs from the original version.
| Resource | Description |
|---|---|
| samaya-ai/promptriever-llama2-7b-v1 | A Promptriever bi-encoder model based on LLaMA 2 (7B parameters). |
| samaya-ai/promptriever-llama3.1-8b-instruct-v1 | A Promptriever bi-encoder model based on LLaMA 3.1 Instruct (8B parameters). |
| samaya-ai/promptriever-llama3.1-8b-v1 | A Promptriever bi-encoder model based on LLaMA 3.1 (8B parameters). |
| samaya-ai/promptriever-mistral-v0.1-7b-v1 | A Promptriever bi-encoder model based on Mistral v0.1 (7B parameters). |
| samaya-ai/RepLLaMA-reproduced | A reproduction of the RepLLaMA model (no instructions). A bi-encoder based on LLaMA 2, trained on the tevatron/msmarco-passage-aug dataset. |
| samaya-ai/msmarco-w-instructions | A dataset of MS MARCO with added instructions and instruction-negatives, used for training the above models. |
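The Promptriever checkpoints are used just like RepLLaMA, except that a free-form natural-language instruction can be included in the query text. The exact query template is defined by the paper and the msmarco-w-instructions training data; the snippet below is only an illustrative sketch of the idea (appending the instruction after the query is an assumption here), reusing the `query: `/`passage: ` prefixes from the training command further below.

```python
# Illustrative sketch only (not an official template): Promptriever accepts a
# natural-language instruction as part of the query text, while RepLLaMA was
# trained on plain queries.
query = "what causes rainbows"
instruction = ("Relevant passages must explain the optics of refraction and "
               "dispersion; passages about rainbows in art or folklore are not relevant.")

repllama_query = f"query: {query}"                     # plain query, no instruction
promptriever_query = f"query: {query} {instruction}"   # prompted query (assumed concatenation)
passage_text = "passage: Rainbows form when sunlight is refracted and dispersed by water droplets."
```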
You can use this with the RepLLaMA example code in tevatron or with mteb:
import mteb

# Evaluate this retriever on NFCorpus (a BEIR retrieval task) via MTEB
model = mteb.get_model("samaya-ai/RepLLaMA-reproduced")
tasks = mteb.get_tasks(tasks=["NFCorpus"], languages=["eng"])
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, batch_size=16)
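If you want to compute embeddings directly rather than through mteb, the sketch below shows one way to do it with transformers and peft. It is a minimal, unofficial example in the spirit of the RepLLaMA usage code: it assumes the checkpoint is published as a PEFT/LoRA adapter on meta-llama/Llama-2-7b-hf (consistent with the training command below) and that torch, transformers, and peft are installed; the query and passage strings are made up.

```python
# Minimal, unofficial sketch: load the checkpoint as a LoRA adapter on its
# Llama-2 base model and score one query/passage pair, mirroring the training
# flags below: "query: "/"passage: " prefixes, appended EOS token,
# EOS (last-token) pooling, and L2 normalization.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModel, AutoTokenizer

peft_name = "samaya-ai/RepLLaMA-reproduced"
config = PeftConfig.from_pretrained(peft_name)
base = AutoModel.from_pretrained(config.base_model_name_or_path)  # meta-llama/Llama-2-7b-hf (gated)
model = PeftModel.from_pretrained(base, peft_name).merge_and_unload().eval()
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

def embed(text: str) -> torch.Tensor:
    # "</s>" mirrors --append_eos_token; the final hidden state gives EOS pooling.
    inputs = tokenizer(text + "</s>", return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0, -1]
    return torch.nn.functional.normalize(hidden, p=2, dim=0)  # mirrors --normalize

query_emb = embed("query: what is a llama?")
passage_emb = embed("passage: The llama is a domesticated South American camelid.")
print(torch.dot(query_emb, passage_emb).item())  # cosine similarity of the unit vectors
```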
This reproduction was trained with the Tevatron codebase (commit 9bb8381) using the following command:
#!/bin/bash
deepspeed --include localhost:0,1,2,3 --master_port 60000 --module tevatron.retriever.driver.train \
--deepspeed deepspeed/ds_zero3_config.json \
--output_dir retriever-llama2-4gpu \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--lora \
--lora_r 32 \
--lora_target_modules q_proj,k_proj,v_proj,o_proj,down_proj,up_proj,gate_proj \
--save_steps 200 \
--dataset_name Tevatron/msmarco-passage-aug \
--query_prefix "query: " \
--passage_prefix "passage: " \
--bf16 \
--pooling eos \
--append_eos_token \
--normalize \
--temperature 0.01 \
--per_device_train_batch_size 8 \
--gradient_checkpointing \
--train_group_size 16 \
--learning_rate 1e-4 \
--query_max_len 32 \
--passage_max_len 196 \
--num_train_epochs 1 \
--logging_steps 10 \
--overwrite_output_dir \
--warmup_steps 100 \
--gradient_accumulation_steps 4
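As configured above, training fits rank-32 LoRA adapters over the attention and MLP projection matrices, with an effective contrastive batch of 4 GPUs × 8 queries per device × 4 gradient-accumulation steps = 128 queries per update; each query is scored against a group of 16 passages (in Tevatron's grouping, one positive plus in-group hard negatives).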
For citation, please refer to the original RepLLaMA paper, and feel free to cite Promptriever as well:
@article{weller2024promptriever,
  title={Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models},
  author={Orion Weller and Benjamin Van Durme and Dawn Lawrie and Ashwin Paranjape and Yuhao Zhang and Jack Hessel},
  year={2024},
  eprint={2409.11136},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2409.11136},
}