ybelkada committed
Commit 406da0e
1 Parent(s): fcba69a

Update README.md

Files changed (1): README.md (+10 -5)
README.md CHANGED
@@ -204,11 +204,16 @@ Falcon-Mamba-7B was trained an internal distributed training codebase, Gigatron.
 
 # Citation
 
-*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
+You can use the following bibtex citation:
 ```
-@article{falconmamba,
-title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
-author={Zuo, Jingwei and Velikanov, Maksim and Rhaiem, Dhia Eddine and Chahed, Ilyas and Belkada, Younes and Kunsch, Guillaume and Hacid, Hakim},
-year={2024}
+@misc{zuo2024falconmambacompetitiveattentionfree,
+title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
+author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
+year={2024},
+eprint={2410.05355},
+archivePrefix={arXiv},
+primaryClass={cs.CL},
+url={https://arxiv.org/abs/2410.05355},
 }
 ```
+```
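
For illustration, a minimal LaTeX sketch of how the new entry can be cited, assuming the `@misc` record above is saved to a hypothetical `references.bib` next to the document:

```
\documentclass{article}
\begin{document}
% Cite Falcon Mamba via the key introduced in this commit
Falcon Mamba~\cite{zuo2024falconmambacompetitiveattentionfree} is an
attention-free 7B language model.
% Assumes the @misc entry above lives in references.bib (hypothetical filename)
\bibliographystyle{plain}
\bibliography{references}
\end{document}
```

Compiled with `pdflatex` and `bibtex`, the standard `plain` style renders the author, title, and year fields; the arXiv-specific `eprint`/`archivePrefix` fields are only picked up by styles that understand them (e.g. biblatex).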