This model is a fine-tune of the "TinyPixel/Llama-2-7B-bf16-sharded" base model on the "timdettmers/openassistant-guanaco" dataset. It targets causal language modeling and was trained with the PEFT framework using bitsandbytes quantization.
A bitsandbytes quantization config was used during training; the exact values are not reproduced in this card.
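As a rough illustration, a representative 4-bit (QLoRA-style) bitsandbytes setup for this kind of Llama-2 7B fine-tune might look like the following. These values are assumptions for illustration, not the settings actually used for this model.

```python
import torch
from transformers import BitsAndBytesConfig

# Representative 4-bit quantization config (assumed, not the model's
# actual training settings): NF4 quantization with double quantization
# and bf16 compute, matching the bf16-sharded base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

The config would be passed as `quantization_config=bnb_config` when loading the base model with `AutoModelForCausalLM.from_pretrained`.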
The model was trained using PEFT version 0.6.0.dev0.
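The card does not record the adapter configuration. A typical PEFT `LoraConfig` for a Llama-2 7B causal-LM fine-tune is sketched below; the ranks, dropout, and target modules are hypothetical, not the values used here.

```python
from peft import LoraConfig

# Hypothetical LoRA adapter settings (common defaults for Llama-2 7B
# fine-tunes, not this model's recorded configuration).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
```

Such a config would be applied to the quantized base model via `peft.get_peft_model(model, peft_config)` before training.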
Base model
TinyPixel/Llama-2-7B-bf16-sharded