pmysl committed
Commit: 6c854e5
Parent(s): 121d1b6

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -5,10 +5,10 @@ license: cc-by-nc-4.0
 # Command R+ GGUF
 
 ## Description
-This repository contains experimental GGUF weights that are currently compatible only with the following fork: [https://github.com/Noeda/llama.cpp/tree/53f71f0026cbed4588b2ad16c51db630d2745794](https://github.com/Noeda/llama.cpp/tree/53f71f0026cbed4588b2ad16c51db630d2745794). I will update them once support for Command R+ is merged into the llama.cpp repository
+This repository contains GGUF weights for `llama.cpp`.
 
 ## Concatenating Weights
 For every variant (except Q2_K), you must concatenate the weights, as they exceed the 50 GB single-file size limit on HuggingFace. You can accomplish this using the `cat` command on Linux (example for the Q3 variant):
 ```bash
-cat command-r-plus-Q3_K_L-0000* > command-r-plus-Q3_K_L.gguf
+cat command-r-plus-Q3_K_L.gguf.* > command-r-plus-Q3_K_L.gguf
 ```
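On platforms without `cat`, the same concatenation can be done with a short Python sketch. This is not part of the repository's instructions, just an equivalent of the `cat` one-liner; the file names below are the Q3_K_L example from the README, so adjust the pattern for your variant:

```python
import glob
import shutil

# Collect the split parts; sorted() ensures the chunks are joined in
# filename order (…gguf.aa, …gguf.ab, …), matching what `cat` with a
# glob would do on Linux.
parts = sorted(glob.glob("command-r-plus-Q3_K_L.gguf.*"))

# Stream each part into a single output file without loading the
# multi-gigabyte chunks into memory.
with open("command-r-plus-Q3_K_L.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```

After it runs, the combined `command-r-plus-Q3_K_L.gguf` can be loaded as usual; the original part files can then be deleted to reclaim disk space.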