---

# The fine-tuned ViT model that beats [Google's state-of-the-art model](https://huggingface.co/google/vit-base-patch16-224) and OpenAI's GPT-4 at identifying maps of cities around the world

An image-classification model fine-tuned to identify which city's map is shown in an input image.

The Vision Transformer (ViT) base model is a transformer encoder model (BERT-like), pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. It was then fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at a resolution of 224x224.

- **Developed by:** STEM.AI
- **Model type:** Image classification of maps of cities
- **Fine-tuned from model:** [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)

### How to use:

[Inference script](https://github.com/STEM-ai/Vision/blob/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/ViT_inference.py)
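
The linked script has the full details; as a quick sketch, the model can be loaded with the standard Transformers ViT classes. The model id below is a placeholder for this repository's Hub id, and `city_map.png` is a hypothetical input file:

```python
# Minimal inference sketch (placeholder model id, hypothetical input file).
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

model_id = "STEM-AI-mtl/<this-model>"  # placeholder: replace with this repo's id

processor = ViTImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("city_map.png").convert("RGB")  # any city-map image

# The processor resizes to 224x224 and normalizes, as in ViT pretraining.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its city label.
predicted_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_idx])
```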

For more code examples, we refer to the [ViT documentation](https://huggingface.co/transformers/model_doc/vit.html#).
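
Alternatively, assuming the checkpoint is published on the Hub, the high-level `pipeline` API wraps the same preprocessing and prediction steps (the model id is again a placeholder):

```python
# One-liner equivalent via the image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="STEM-AI-mtl/<this-model>")
print(classifier("city_map.png"))  # top city labels with confidence scores
```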

## Training data