---
license: apache-2.0
---
Unofficial GGUF quantizations of Grok-1. They work with llama.cpp as of [PR: Add grok-1 support #6204](https://github.com/ggerganov/llama.cpp/pull/6204).
The splits use the multi-shard loading added in [PR: llama_model_loader: support multiple split/shard GGUFs](https://github.com/ggerganov/llama.cpp/pull/6187), so merging the files with `gguf-split` is no longer needed; just point your tool at the first split file.
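As a rough illustration, here is a minimal sketch using llama-cpp-python (not part of this repo). It assumes a build based on a llama.cpp version that includes both PRs above; the split file name is illustrative, so substitute the actual first shard from your download.

```python
# Minimal sketch: loading a multi-split GGUF directly, no gguf-split merge step.
# Assumptions: llama-cpp-python built against llama.cpp with PRs #6204 and #6187;
# the model_path below is an illustrative placeholder for the first split file.
from llama_cpp import Llama

llm = Llama(
    model_path="grok-1-Q2_K-00001-of-00009.gguf",  # first shard; the loader picks up the rest
    n_ctx=2048,        # context window, adjust as needed
    n_gpu_layers=0,    # set > 0 to offload layers if built with GPU support
)

out = llm("Grok-1 is", max_tokens=32)
print(out["choices"][0]["text"])
```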
For now, only the Q2_K quant is available; the others (Q3_K, Q4_K, Q5_K, Q6_K, IQ3_S) are prepared and waiting to be uploaded.