---
license: cc-by-nc-4.0
---

# Command R+ GGUF

## Description
This repository contains experimental GGUF weights that are currently compatible only with the following fork: [https://github.com/Noeda/llama.cpp/tree/53f71f0026cbed4588b2ad16c51db630d2745794](https://github.com/Noeda/llama.cpp/tree/53f71f0026cbed4588b2ad16c51db630d2745794). I will update the weights once support for Command R+ is merged into the main llama.cpp repository.

## Concatenating Weights
For every variant except Q2_K, you must concatenate the split weight files before use, as each variant exceeds the 50 GB per-file size limit on Hugging Face. On Linux, you can do this with the `cat` command (example for the Q3_K_L variant):
```bash
cat command-r-plus-Q3_K_L-0000* > command-r-plus-Q3_K_L.gguf
```
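If `cat` is not available (e.g. on Windows), the same concatenation can be done with a short Python script. This is a minimal cross-platform sketch; it assumes the split parts follow the `Q3_K_L` naming pattern from the example above, so adjust the glob pattern for your variant:

```python
import glob
import shutil

# Collect the split parts in lexicographic order, which matches the
# numeric part suffix used in the example above (adjust for your variant).
parts = sorted(glob.glob("command-r-plus-Q3_K_L-0000*"))

# Stream each part into a single output file without loading it all
# into memory at once.
with open("command-r-plus-Q3_K_L.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```

The resulting `command-r-plus-Q3_K_L.gguf` is byte-identical to what the `cat` command above produces.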