TheBloke committed
Commit 0591b6b
1 Parent(s): 3885067

Update README.md

Files changed (1)
  1. README.md +13 -42
README.md CHANGED
@@ -9,60 +9,31 @@ Quantised 2bit, 4bit and 5bit GGMLs of [changsung's alpaca-lora-65B](https://hug
 
  I also have 4bit GPTQ files for GPU inference available here: [TheBloke/alpaca-lora-65B-GPTQ-4bit](https://huggingface.co/TheBloke/alpaca-lora-65B-GPTQ-4bit).
 
+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+
+ llama.cpp recently made a breaking change to its quantisation methods.
+
+ I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+
+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
+
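If you need a newer build, here is a minimal sketch of compiling llama.cpp from source at a compatible commit (assuming a Unix-like system with `git`, `make` and a C/C++ toolchain; adjust for your platform):

```
# Clone llama.cpp and build at (or after) the breaking-change commit.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # commit from May 12th 2023; omit to build the latest code
make                   # produces the ./main binary used below
```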
  ## Provided files
  | Name | Quant method | Bits | Size | RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | `alpaca-lora-65B.ggml.q2_0.bin` | q2_0 | 2bit | 24.5GB | 27GB | Lowest RAM requirements, minimum quality |
- | `alpaca-lora-65B.ggml.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | Maximum compatibility |
- | `alpaca-lora-65B.ggml.q4_2.bin` | q4_2 | 4bit | 40.8GB | 43GB | Best compromise between resources, speed and quality |
- | `alpaca-lora-65B.ggml.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
- | `alpaca-lora-65B.ggml.q5_1.bin` | q5_1 | 5bit | 49GB | 51GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |
-
- * The q2_0 file requires the least resources, but its quality is noticeably lower than the others'.
- * It's likely better to use a 30B model at 4bit than a 65B model at 2bit.
+ | `alpaca-lora-65B.ggml.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | 4bit. |
+ | `alpaca-lora-65B.ggml.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | 5bit. Higher quality than 4bit, at the cost of slightly higher resource usage. |
+ | `alpaca-lora-65B.ggml.q5_1.bin` | q5_1 | 5bit | 49GB | 51GB | 5bit. Slightly higher resource usage and quality than q5_0. |
+
  * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
  * The q5_0 file uses the brand-new 5bit method released 26th April. This is the 5bit equivalent of q4_0.
  * The q5_1 file uses the brand-new 5bit method released 26th April. This is the 5bit equivalent of q4_1.
 
- ## q4_2 compatibility
-
- q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.
-
- To use this file you will need recent llama.cpp code, and future updates to llama.cpp may require it to be re-generated.
-
- If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.
-
- If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
-
- ## q5_0 and q5_1 compatibility
-
- These new methods were added to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
-
- Don't expect any third-party UIs/tools to support them yet.
-
- ## 2bit q2_0 compatibility
-
- This file was created using an experimental 2bit method being trialled in [llama.cpp PR 1004](https://github.com/ggerganov/llama.cpp/pull/1004).
-
- This code is not yet merged into the main `llama.cpp` repo and it is not clear if it ever will be.
-
- To run this file, you need to compile and run the same `llama.cpp` code that was used to create it.
-
- To check out and compile that code, do the following:
- ```
- git clone https://github.com/sw/llama.cpp llama-q2q3
- cd llama-q2q3
- git checkout q2q3
- make
- ```
-
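For completeness, a sketch of running the old q2_0 file with the experimental build above (the thread count is illustrative; the prompt follows the same Alpaca template as the command further below):

```
# Run from inside the llama-q2q3 directory compiled above.
./main -t 8 -m alpaca-lora-65B.ggml.q2_0.bin --color -c 2048 \
  -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```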
  ## How to run in `llama.cpp`
 
  I use the following command line; adjust for your tastes and needs:
 
  ```
- ./main -t 18 -m alpaca-lora-65B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
+ ./main -t 18 -m alpaca-lora-65B.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
  ### Instruction:
  Write a story about llamas
  ### Response:"
  ```
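If you prefer chat-style interaction, a variant sketch using llama.cpp's interactive flags (`-i` and `-r` are standard llama.cpp options; the values shown are illustrative, not a recommendation):

```
# Interactive mode: generation pauses whenever the reverse prompt appears.
./main -t 18 -m alpaca-lora-65B.ggml.q4_0.bin --color -c 2048 --temp 0.7 \
  --repeat_penalty 1.1 -i -r "### Instruction:" \
  -p "Below is an instruction that describes a task. Write a response that appropriately completes the request."
```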
 