praeclarumjj3 committed
Commit f9a3370 · 1 Parent(s): 34961fb
Update README.md
README.md CHANGED
@@ -14,13 +14,13 @@ widget:
 
 OneFormer model trained on the Cityscapes dataset (large-sized version, Dinat backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).
 
-![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/oneformer_teaser.png)
+![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_teaser.png)
 
 ## Model description
 
 OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.
 
-![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/oneformer_architecture.png)
+![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_architecture.png)
 
 ## Intended uses & limitations
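The "task-dynamic for inference" behavior described in the model description amounts to choosing the task token at prediction time. Below is a minimal sketch using the `transformers` OneFormer classes; the checkpoint id `shi-labs/oneformer_cityscapes_dinat_large` and the sample image URL are assumptions for illustration (the card's checkpoint name is not stated in the diff above), not something this commit specifies.

```python
# Minimal sketch; assumes the checkpoint id below matches this model card
# (Cityscapes, large-sized, Dinat backbone) and that `transformers` and `natten` are installed.
import requests
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

checkpoint = "shi-labs/oneformer_cityscapes_dinat_large"  # assumed checkpoint id
processor = OneFormerProcessor.from_pretrained(checkpoint)
model = OneFormerForUniversalSegmentation.from_pretrained(checkpoint)

# Any RGB street-scene image works; this URL is only a placeholder for the example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The task token is selected at inference time: "semantic", "instance", or "panoptic".
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
outputs = model(**inputs)

# Post-process into a per-pixel class map at the original image resolution.
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)
```

Swapping `task_inputs=["semantic"]` for `["instance"]` or `["panoptic"]` (with the matching `post_process_*` call) exercises the other two tasks with the same weights, which is the point of the single task-conditioned model.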