---
license: apache-2.0
datasets:
  - totally-not-an-llm/sharegpt-hyperfiltered-3k
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# Puma-3B


This is OpenLLaMA 3B V2 finetuned on ShareGPT Hyperfiltered for 1 epoch.

Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
```

Leave a newline after `### RESPONSE:` for the model to answer.
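The template can be filled in programmatically before passing the result to the model. A minimal sketch (the `build_prompt` helper is illustrative, not part of the model's API):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the HUMAN/RESPONSE template.

    The trailing newline after "### RESPONSE:" leaves room
    for the model to generate its answer.
    """
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"

prompt = build_prompt("What is the capital of France?")
```

The resulting string can be tokenized and passed to the model as-is.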

GGML quants available here.
GPTQ quants available here.

Note: Don't expect this model to be good; I was just starting out with finetuning, so please don't roast me!