Update README.md
README.md
@@ -94,12 +94,6 @@ top_k: 49
 
 ## Quantized versions
 
-### EXL2
-
-A 4.250b EXL2 version of the model can be found here:
-
-https://huggingface.co/oobabooga/CodeBooga-34B-v0.1-EXL2-4.250b
-
 ### GGUF
 
 TheBloke has kindly provided GGUF quantizations for llama.cpp:
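For context on the GGUF pointer that survives this change, below is a minimal, hypothetical sketch of loading one of TheBloke's GGUF quantizations through the llama-cpp-python bindings. The filename and generation parameters are assumptions for illustration, not taken from the README or the linked repository.

```python
# Minimal sketch (assumptions noted): load a GGUF quantization of
# CodeBooga-34B via llama-cpp-python and run one completion.
from llama_cpp import Llama

# The filename below is an assumed example of TheBloke's GGUF naming
# scheme; use the actual file you downloaded.
llm = Llama(
    model_path="codebooga-34b-v0.1.Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

output = llm(
    "Write a Python function that reverses a string.",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```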