Update README.md
README.md
CHANGED
@@ -153,7 +153,7 @@ print(tokenizer.decode(outputs[0]))
 
 ## Training Data
 
-Falcon-Mamba has been trained with ~
+Falcon-Mamba has been trained with ~ 6,000 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large-volume web-only dataset filtered and deduplicated.
 Similar to the other [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192.
 Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity.
 Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long-range dependencies.
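The note on inference context length can be made concrete with a short sketch in the style of the README's earlier `generate` example. This is only illustrative and assumes the `tiiuae/falcon-mamba-7b` checkpoint id and the standard `transformers`/`torch` APIs; it is not part of the commit itself.

```python
# Illustrative sketch: training used sequences up to 8,192 tokens, but Mamba's
# recurrent state lets the model consume longer inputs at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt well past the 8,192-token training context.
long_prompt = "Summarize the following notes. " + "data point; " * 4000
inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```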