bartowski committed
Commit 21bf49c
1 Parent(s): 2390c23

Update README.md

Files changed (1):
  1. README.md +1 -3
README.md CHANGED
@@ -271,11 +271,9 @@ huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Met
 If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
 
 ```
-huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*" Meta-Llama-3-70B-Instruct-Q8_0 --local-dir-use-symlinks False
+huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*" --local-dir Meta-Llama-3-70B-Instruct-Q8_0 --local-dir-use-symlinks False
 ```
 
-You can leave out the local folder (Meta-Llama-3-70B-Instruct-Q8_0) to just download them all to where you're running the command.
-
 ## Which file should I choose?
 
 A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
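The fix in this commit is that the destination folder is now passed through an explicit `--local-dir` flag rather than dangling as a bare positional argument after `--include`. A minimal sketch of the corrected invocation (the repo and folder names are taken from the diff; actually running it requires `huggingface_hub` installed and a large amount of disk space, so here the command is only assembled and printed):

```shell
# Sketch, not a definitive recipe: the corrected download command from this
# commit. The target folder is given via --local-dir; the old form left it as
# a stray positional argument.
FIXED_CMD='huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*" --local-dir Meta-Llama-3-70B-Instruct-Q8_0 --local-dir-use-symlinks False'

# Print rather than execute, so the flag form is visible without the download.
echo "$FIXED_CMD"
```

After a real run, the split `.gguf` shards land under `Meta-Llama-3-70B-Instruct-Q8_0/Meta-Llama-3-70B-Instruct-Q8_0.gguf/`, matching the `--include` pattern.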