
CapybaraHermes-2.5-Mistral-7B

Built with Distilabel

This model is the launch partner of the capybara-dpo dataset, built with ⚗️ distilabel. It is a preference-tuned OpenHermes-2.5-Mistral-7B.

CapybaraHermes has been preference-tuned with LoRA and TRL for 3 epochs using Argilla's dpo-mix-7k dataset.

To test the impact on multi-turn performance, we used MTBench. We also include the Nous Benchmark results, as well as Mistral-7B-Instruct-v0.2 for reference, since it is a strong 7B model on MTBench:

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | MTBench First Turn | MTBench Second Turn | Nous avg. | MTBench avg. |
|-------|---------|---------|------------|----------|--------------------|---------------------|-----------|--------------|
| argilla/CapybaraHermes-2.5-Mistral-7B | 43.8 | 73.35 | 57.07 | 42.44 | 8.24375 | 7.5625 | 54.16 | 7.903125 |
| teknium/OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | 8.25 | 7.2875 | 52.42 | 7.76875 |
| Mistral-7B-Instruct-v0.2 | 38.5 | 71.64 | 66.82 | 42.29 | 7.8375 | 7.1 | 54.81 | 7.46875 |
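As a sanity check, the two average columns can be recomputed from the per-benchmark scores in the table; a minimal sketch using the CapybaraHermes row (all values copied from the table above):

```python
# Per-benchmark scores for argilla/CapybaraHermes-2.5-Mistral-7B, from the table.
nous_scores = {"AGIEval": 43.8, "GPT4All": 73.35, "TruthfulQA": 57.07, "Bigbench": 42.44}
mtbench_turns = {"first": 8.24375, "second": 7.5625}

# Nous avg. is the mean of the four Nous benchmark scores.
nous_avg = sum(nous_scores.values()) / len(nous_scores)        # ≈ 54.16
# MTBench avg. is the mean of the first- and second-turn scores.
mtbench_avg = sum(mtbench_turns.values()) / len(mtbench_turns)  # ≈ 7.903125

print(round(nous_avg, 2), mtbench_avg)
```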

The most interesting aspect in the context of the capybara-dpo dataset is the increased MTBench Second Turn score.

For the merge lovers, we also preference-tuned Beagle14-7B with a mix of capybara-dpo and distilabel Orca pairs, using the same recipe as NeuralBeagle (see YALL - Yet Another LLM Leaderboard for reference):

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|---------|---------|------------|----------|---------|
| DistilabelBeagle14-7B | 45.29 | 76.92 | 71.66 | 48.78 | 60.66 |

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: Argilla
  • Shared by [optional]: Argilla
  • Model type: 7B chat model
  • Language(s) (NLP): English
  • License: Same as OpenHermes
  • Finetuned from model [optional]: OpenHermes-2.5-Mistral-7B
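
Because the base model is OpenHermes-2.5-Mistral-7B, prompts should follow the ChatML template it was trained with. A minimal sketch of building a single-turn ChatML prompt (the system and user messages here are illustrative placeholders, not from the original card):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in ChatML, the template used by OpenHermes-2.5."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "What is a capybara?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template` on the model's tokenizer produces this format automatically; the helper above just makes the structure explicit.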
