Quantization made by Richard Erkhov.
Llama-3-8b-sft-mixture - GGUF
- Model creator: https://huggingface.co/OpenRLHF/
- Original model: https://huggingface.co/OpenRLHF/Llama-3-8b-sft-mixture/
Original model description:
library_name: transformers
tags: []
Copied from https://huggingface.co/RLHFlow/LLaMA3-SFT. We fixed the generation_config.json.
This is the SFT checkpoint used for the Online-RLHF project; see the accompanying technical report for details.
The model was trained from meta-llama/Meta-Llama-3-8B for 1 epoch on a mixture of diverse, high-quality open-source data; detailed parameters are given in the report. It has not been trained with RLHF, so it can serve as a good starting point for RLHF research.
The datasets included: ShareGPT, Evol-Instruct, SlimOrca, MathInstruct, Magicoder-Evol-Instruct, GPT4-LLM, OrcaMath, GPTeacher, UltraInteract.
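As a usage sketch not taken from the original card: this SFT checkpoint is assumed to follow the standard Llama-3 chat template (verify against the model's tokenizer_config before relying on it). The hypothetical helper below only illustrates how such a prompt string would be assembled for a GGUF runtime like llama.cpp.

```python
# Hypothetical helper: builds a Llama-3-style chat prompt string.
# Assumption: this checkpoint uses the standard Llama-3 chat template
# (not confirmed by the card) -- check tokenizer_config.json to be sure.
def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([{"role": "user", "content": "Hello!"}])
```

The resulting string can be passed as the raw prompt to a GGUF runtime; most runtimes can also apply the template automatically from the model's metadata, which is the safer default.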