llama.cpp GGUF Quantizations of SauerkrautLM-1.5b

Original model: https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b

Prompt format

The model uses the ChatML template. English system prompt:

```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

German system prompt (translation: "You are SauerkrautLM, a helpful AI assistant."):

```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
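
If you drive the model through a raw completion API rather than a chat endpoint, this template has to be assembled by hand. A minimal Python sketch of that assembly (the system and user strings are placeholders, not fixed by this card):

```python
# Build the ChatML-style prompt shown above, ending with the open
# assistant turn so the model generates the reply.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt(
    "You are SauerkrautLM, a helpful AI assistant.",
    "What is the capital of Germany?",
)
print(prompt)
```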

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| SauerkrautLM-1.5b-f16.gguf | f16 | 3.1GB | Highest quality (unquantized). |
| SauerkrautLM-1.5b-Q8_0.gguf | Q8_0 | 1.6GB | Extremely high quality, generally unneeded but max available quant. |
| SauerkrautLM-1.5b-Q6_K.gguf | Q6_K | 1.3GB | Very high quality, near perfect, recommended. |
| SauerkrautLM-1.5b-Q5_K_M.gguf | Q5_K_M | 1.1GB | High quality, recommended. |
| SauerkrautLM-1.5b-Q5_K_S.gguf | Q5_K_S | 1.1GB | High quality, recommended. |
| SauerkrautLM-1.5b-Q4_K_M.gguf | Q4_K_M | 0.98GB | Good quality, uses about 4.83 bits per weight, recommended. |
| SauerkrautLM-1.5b-Q4_K_S.gguf | Q4_K_S | 0.94GB | Slightly lower quality with more space savings, recommended. |
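
Once downloaded, any llama.cpp-compatible runtime can load these files. As one illustration, a minimal sketch using the third-party llama-cpp-python bindings, which this card does not itself document (the model path and settings are assumptions):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point at whichever quant you downloaded from the table above.
llm = Llama(model_path="./SauerkrautLM-1.5b-Q4_K_M.gguf", n_ctx=2048)

# Recent llama-cpp-python versions read the ChatML template from the GGUF
# metadata; if yours does not, pass a pre-built prompt string instead.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are SauerkrautLM, a helpful AI assistant."},
        {"role": "user", "content": "What is the capital of Germany?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```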

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```shell
huggingface-cli download VAGOsolutions/SauerkrautLM-1.5b.GGUF --include "SauerkrautLM-1.5b-Q4_K_M.gguf" --local-dir ./
```
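
The same single-file download also works from Python via huggingface_hub, the package the CLI ships with; a short sketch, using the Q4_K_M file as an arbitrary example:

```python
from huggingface_hub import hf_hub_download

# Fetch a single GGUF file rather than cloning the whole repo.
path = hf_hub_download(
    repo_id="VAGOsolutions/SauerkrautLM-1.5b.GGUF",
    filename="SauerkrautLM-1.5b-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```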