FalconLLM committed on
Commit
6f1c467
•
1 Parent(s): 4e8f82c

Add usage recommendations

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -13,6 +13,8 @@ license: apache-2.0
 
 *Paper coming soon 😊.*
 
+ 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
+
 ## Why use Falcon-40B-Instruct?
 
 * **You are looking for a ready-to-use chat/instruct model based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).**
@@ -52,6 +54,10 @@ for seq in sequences:
 
 ```
 
+ For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
+
+ You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.
+
 
 
  # Model Card for Falcon-40B-Instruct
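
The first recommendation added here points to the HF blog post for inference, finetuning, and quantization. As one illustration of the quantization route it mentions, below is a minimal sketch of loading Falcon-40B-Instruct in 4-bit with transformers and bitsandbytes; the `BitsAndBytesConfig` values, the prompt, and the generation settings are assumptions for illustration, not settings from the model card.

```python
# Minimal sketch (assumptions noted in comments): 4-bit quantized loading of
# Falcon-40B-Instruct with transformers + bitsandbytes, one of the routes the
# linked blog post covers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-40b-instruct"

# NF4 quantization with bfloat16 compute: roughly quarters the weight memory
# compared to bfloat16 weights. These exact settings are an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",        # spread layers across the available GPUs
    trust_remote_code=True,   # Falcon shipped custom modeling code at release
)

prompt = "Write a short poem about Valencia."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```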
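The Text Generation Inference recommendation is a how-to: once a TGI server is serving the model, it exposes a REST `/generate` endpoint. A minimal sketch of querying it from Python follows; the launch command in the comment, the host/port, and the generation parameters are assumptions for illustration.

```python
# Minimal sketch: querying a running Text Generation Inference server.
# Assumes the server was started along the lines of
#   text-generation-launcher --model-id tiiuae/falcon-40b-instruct
# and is reachable on localhost:8080 (both assumptions).
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Write a short poem about Valencia.",
        "parameters": {"max_new_tokens": 200, "temperature": 0.7, "top_p": 0.9},
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["generated_text"])
```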
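The 85-100GB figure is consistent with back-of-the-envelope arithmetic: roughly 40B parameters at 2 bytes each in bfloat16 is about 80GB for the weights alone, before activations and the KV cache. The sketch below checks that arithmetic and shows one way to shard the bfloat16 model across several GPUs with `device_map="auto"`; the per-device `max_memory` caps are assumptions for illustration.

```python
# Back-of-the-envelope memory estimate for Falcon-40B in bfloat16, plus a
# sketch of sharding the model across GPUs. The max_memory caps are assumptions.
import torch
from transformers import AutoModelForCausalLM

n_params = 40e9          # ~40B parameters
bytes_per_param = 2      # bfloat16
print(f"weights alone: ~{n_params * bytes_per_param / 1e9:.0f} GB")  # ~80 GB

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",                                    # shard across visible GPUs
    max_memory={0: "45GiB", 1: "45GiB", "cpu": "60GiB"},  # illustrative caps only
    trust_remote_code=True,
)
```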