🚀 Falcon-180B
Falcon-180B is a 180B-parameter causal decoder-only model built by TII and trained on 3,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Falcon-180B TII License and Acceptable Use Policy.
Paper coming soon 😊
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blog post from HF or this one from the release of the 40B!
Note that since the 180B is larger than what can easily be handled with `transformers` + `accelerate`, we recommend using Text Generation Inference.
You will need at least 400GB of memory to swiftly run inference with Falcon-180B.
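If you serve the model through Text Generation Inference, you can then query it from Python. Below is a minimal sketch, assuming a TGI instance is already up and serving tiiuae/falcon-180b; the endpoint URL is hypothetical.

```python
# Minimal sketch of querying a running Text Generation Inference server.
# Assumes TGI is already serving tiiuae/falcon-180b; the URL is hypothetical.
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")  # hypothetical local endpoint

output = client.text_generation(
    "Girafatron is obsessed with giraffes.",
    max_new_tokens=100,
    do_sample=True,  # sample instead of greedy decoding
    top_k=10,
)
print(output)
```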
Why use Falcon-180B?
- It is the best open-access model currently available, and one of the best models overall. Falcon-180B outperforms LLaMA-2, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard.
- It features an architecture optimized for inference, with multiquery (Shazeer et al., 2019).
- It is made available under a permissive license allowing for commercial use.
- ⚠️ This is a raw, pretrained model, which should be further finetuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-180B-Chat.
💸 Looking for a smaller, less expensive model? Falcon-7B and Falcon-40B are Falcon-180B's little brothers!
💥 Falcon LLMs require PyTorch 2.0 for use with `transformers`!
Model Card for Falcon-180B
Model Details
Model Description
- Developed by: https://www.tii.ae;
- Model type: Causal decoder-only;
- Language(s) (NLP): English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- License: Falcon-180B TII License and Acceptable Use Policy.
Model Source
- Paper: coming soon.
Uses
See the acceptable use policy.
Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots).
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
Falcon-180B is trained mostly on English, German, Spanish, and French, with limited capabilities also in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
Recommendations
We recommend that users of Falcon-180B consider finetuning it for their specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
How to Get Started with the Model
To run inference with the model in full `bfloat16` precision, you need approximately 8x A100 80GB GPUs or equivalent.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-180b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # full bfloat16 precision
    trust_remote_code=True,
    device_map="auto",           # shard the model across all available GPUs
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,          # sample instead of greedy decoding
    top_k=10,                # restrict sampling to the 10 most likely tokens
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
Training Details
Training Data
Falcon-180B was trained on 3,500B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile (Gao et al., 2020).
| Data source        | Fraction | Tokens | Sources                           |
|--------------------|----------|--------|-----------------------------------|
| RefinedWeb-English | 75%      | 750B   | massive web crawl                 |
| RefinedWeb-Europe  | 7%       | 70B    | European massive web crawl        |
| Books              | 6%       | 60B    |                                   |
| Conversations      | 5%       | 50B    | Reddit, StackOverflow, HackerNews |
| Code               | 5%       | 50B    |                                   |
| Technical          | 2%       | 20B    | arXiv, PubMed, USPTO, etc.        |
RefinedWeb-Europe is made of the following languages:
| Language   | Fraction of multilingual data | Tokens |
|------------|-------------------------------|--------|
| German     | 26%                           | 18B    |
| Spanish    | 24%                           | 17B    |
| French     | 23%                           | 16B    |
| Italian    | 7%                            | 5B     |
| Portuguese | 4%                            | 3B     |
| Polish     | 4%                            | 3B     |
| Dutch      | 4%                            | 3B     |
| Romanian   | 3%                            | 2B     |
| Czech      | 3%                            | 2B     |
| Swedish    | 2%                            | 1B     |
The data was tokenized with the Falcon tokenizer.
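As an illustration only, you can load this tokenizer from the Hub and count the tokens in a sample string:

```python
# Illustrative: load the Falcon tokenizer and tokenize a sample string.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-180b")
ids = tokenizer("The Falcon series of language models.")["input_ids"]
print(len(ids), ids)  # number of tokens and their ids
```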
Training Procedure
Falcon-180B was trained on up to 4,096 A100 40GB GPUs, using a 3D parallelism strategy (TP=8, PP=8, DP=64; 8 × 8 × 64 = 4,096 GPUs at peak) combined with ZeRO.
Training Hyperparameters
| Hyperparameter | Value    | Comment                                    |
|----------------|----------|--------------------------------------------|
| Precision      | bfloat16 |                                            |
| Optimizer      | AdamW    |                                            |
| Learning rate  | 1.25e-4  | 4B tokens warm-up, cosine decay to 1.25e-5 |
| Weight decay   | 1e-1     |                                            |
| Z-loss         | 1e-4     |                                            |
| Batch size     | 2048     | 100B tokens ramp-up                        |
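As a hedged sketch, the schedule implied by the table can be written as a function of tokens seen: linear warm-up over the first 4B tokens, then cosine decay from 1.25e-4 down to 1.25e-5. The card does not state when the decay ends; this sketch assumes it spans the full 3,500B training tokens.

```python
import math

def falcon_lr(tokens_seen: float,
              peak: float = 1.25e-4,
              floor: float = 1.25e-5,
              warmup_tokens: float = 4e9,
              total_tokens: float = 3.5e12) -> float:
    """Illustrative LR schedule: linear warm-up, then cosine decay to the floor.
    Assumes (not stated by the card) that decay spans all 3,500B tokens."""
    if tokens_seen < warmup_tokens:
        # Linear warm-up from 0 to the peak learning rate.
        return peak * tokens_seen / warmup_tokens
    # Cosine decay from peak to floor over the remaining tokens.
    progress = min(1.0, (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens))
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))

print(falcon_lr(2e9))     # mid warm-up: 6.25e-05
print(falcon_lr(4e9))     # peak: 1.25e-04
print(falcon_lr(3.5e12))  # end of training: 1.25e-05
```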
Speeds, Sizes, Times
Training started in early 2023.
Evaluation
Paper coming soon.
See the OpenLLM Leaderboard for early results.
Technical Specifications
Model Architecture and Objective
Falcon-180B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
- Positional embeddings: rotary (Su et al., 2021);
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder-block: parallel attention/MLP with two layer norms.
For multiquery, we use an internal variant with independent keys and values per tensor-parallel degree (so-called multigroup).
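To make the layout concrete, here is an illustrative shape sketch (not TII's implementation): queries keep all attention heads, while keys and values are shared, with one KV head per tensor-parallel rank under the TP=8 setting used in training. The head count of 232 follows from d_model = 14848 and head_dim = 64 in the table below.

```python
# Illustrative multiquery/multigroup attention shapes (not TII's code).
# Queries keep all heads; keys/values are shared, one KV head per TP rank.
import torch

batch, seq = 1, 16          # short sequence for illustration (training used 2048)
n_head, head_dim = 232, 64  # 232 heads x 64 dims = 14848 = d_model
kv_heads = 8                # assumption: one KV head per TP rank (TP=8)

q = torch.randn(batch, n_head, seq, head_dim)
k = torch.randn(batch, kv_heads, seq, head_dim)
v = torch.randn(batch, kv_heads, seq, head_dim)

# Each KV head serves n_head // kv_heads query heads: broadcast it to them.
k = k.repeat_interleave(n_head // kv_heads, dim=1)
v = v.repeat_interleave(n_head // kv_heads, dim=1)

# Scaled dot-product attention (causal masking omitted for brevity).
scores = (q @ k.transpose(-2, -1)) / head_dim**0.5
out = torch.softmax(scores, dim=-1) @ v
print(out.shape)  # torch.Size([1, 232, 16, 64])
```

The payoff is at inference time: the KV cache stores `kv_heads` heads instead of `n_head`, a 29x reduction in this configuration.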
| Hyperparameter  | Value | Comment                                |
|-----------------|-------|----------------------------------------|
| Layers          | 80    |                                        |
| `d_model`       | 14848 |                                        |
| `head_dim`      | 64    | Reduced to optimise for FlashAttention |
| Vocabulary      | 65024 |                                        |
| Sequence length | 2048  |                                        |
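The parallel attention/MLP decoder block listed above can be sketched as follows. This is an illustrative module, not Falcon's actual implementation: standard multi-head attention stands in for Falcon's multiquery + rotary attention, and small dimensions are used so the example runs cheaply.

```python
# Illustrative parallel decoder block: the attention and MLP branches read the
# same input through two separate layer norms and are summed into a single
# residual stream. Not Falcon's actual code.
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, d_model: int, n_head: int):
        super().__init__()
        self.ln_attn = nn.LayerNorm(d_model)  # norm feeding the attention branch
        self.ln_mlp = nn.LayerNorm(d_model)   # norm feeding the MLP branch
        # Standard attention stands in for multiquery + rotary attention.
        self.attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln_attn(x)
        attn_out, _ = self.attn(h, h, h)  # causal masking omitted for brevity
        return x + attn_out + self.mlp(self.ln_mlp(x))

# Small dimensions for illustration; Falcon-180B uses d_model=14848, 64-dim heads.
block = ParallelBlock(d_model=512, n_head=8)
y = block(torch.randn(1, 16, 512))
print(y.shape)  # torch.Size([1, 16, 512])
```

Because the two branches are independent, their matrix multiplications can run concurrently, which is part of why this layout is friendly to large-scale training and inference.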
Compute Infrastructure
Hardware
Falcon-180B was trained on AWS SageMaker, on up to 4,096 A100 40GB GPUs in P4d instances.
Software
Falcon-180B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
Citation
Paper coming soon 😊 (actually this time). In the meantime, you can use the following information to cite:
```bibtex
@article{falcon,
  title={The Falcon Series of Language Models: Towards Open Frontier Models},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Alhammadi, Maitha and Mazzotta, Daniele and Hesslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 RefinedWeb paper.
```bibtex
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```
Contact
falconllm@tii.ae