anas-awadalla committed on
Commit 11fc728
Parent: ef1d867

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 
 # OpenFlamingo-4B (CLIP ViT-L/14, RedPajama-INCITE-Instruct-3B-v1)
 
-[Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
+[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
 
 OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
 This 4B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and an instruction tuned [RedPajama-3B](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1) language model.
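
For context on the model card being edited, below is a minimal sketch of how a checkpoint with the components described in the README (CLIP ViT-L/14 vision encoder plus the instruction-tuned RedPajama-INCITE-3B language model) is typically assembled with the `open_flamingo` library. The repository id `openflamingo/OpenFlamingo-4B-vitl-rpj3b-langinstruct`, the filename `checkpoint.pt`, and `cross_attn_every_n_layers=2` are assumptions for illustration; they are not stated in this commit.

```python
# Minimal sketch, assuming the repo id, checkpoint filename, and
# cross-attention spacing noted above (all assumptions, not from this commit).
import torch
from huggingface_hub import hf_hub_download
from open_flamingo import create_model_and_transforms

# Build the architecture described in the README: CLIP ViT-L/14 vision encoder
# paired with the instruction-tuned RedPajama-INCITE-3B language model.
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="togethercomputer/RedPajama-INCITE-Instruct-3B-v1",
    tokenizer_path="togethercomputer/RedPajama-INCITE-Instruct-3B-v1",
    cross_attn_every_n_layers=2,  # assumed value for the 4B configuration
)

# Fetch the trained weights from the Hub and load them into the assembled model.
checkpoint_path = hf_hub_download(
    "openflamingo/OpenFlamingo-4B-vitl-rpj3b-langinstruct",  # assumed repo id
    "checkpoint.pt",  # assumed filename
)
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```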