Update README.md
README.md (CHANGED)
@@ -202,9 +202,11 @@ A version using the transformers library is also available here: https://hugging
 - Params: 85.8M (base)
 - Image size: 224 x 224 x 3
 - Patch size: 16 x 16 x 3
+- **Pre-training:**
+  - Dataset: Pancancer40M, created from [TCGA-COAD](https://portal.gdc.cancer.gov/repository?facetTab=cases&filters=%7B%22content%22%3A%5B%7B%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-COAD%22%5D%7D%2C%22op%22%3A%22in%22%7D%2C%7B%22content%22%3A%7B%22field%22%3A%22files.experimental_strategy%22%2C%22value%22%3A%5B%22Diagnostic%20Slide%22%5D%7D%2C%22op%22%3A%22in%22%7D%5D%2C%22op%22%3A%22and%22%7D&searchTableTab=cases)
+  - Framework: [iBOT](https://github.com/bytedance/ibot), self-supervised, masked image modeling, self-distillation
 - **Papers:**
   - [Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling](https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2)
-- **Dataset:** Pancancer40M, created from [TCGA-COAD](https://portal.gdc.cancer.gov/repository?facetTab=cases&filters=%7B%22content%22%3A%5B%7B%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-COAD%22%5D%7D%2C%22op%22%3A%22in%22%7D%2C%7B%22content%22%3A%7B%22field%22%3A%22files.experimental_strategy%22%2C%22value%22%3A%5B%22Diagnostic%20Slide%22%5D%7D%2C%22op%22%3A%22in%22%7D%5D%2C%22op%22%3A%22and%22%7D&searchTableTab=cases)
 - **Original:** https://github.com/owkin/HistoSSLscaling
 - **License:** [Owkin non-commercial license](https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt)
 
@@ -233,8 +235,8 @@ model = timm.create_model(
 data_config = timm.data.resolve_model_data_config(model)
 transforms = timm.data.create_transform(**data_config, is_training=False)
 
-
-output = model(
+data = transforms(img).unsqueeze(0)  # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
+output = model(data)  # output is a (batch_size, num_features) shaped tensor
 ```
 
 ## Citation
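A quick sanity check on the spec bullets in the first hunk: a 224 x 224 input split into 16 x 16 patches gives a (224 / 16) x (224 / 16) = 14 x 14 = 196-token patch grid (plus a class token), and the ~85.8M parameter count is consistent with a ViT-Base/16 backbone.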
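With the second hunk applied, the README's snippet becomes a complete feature-extraction example. Below is a minimal end-to-end sketch of how that snippet would run; the hub identifier (the real one is truncated in the hunk header above) and the image path are assumptions for illustration, not values taken from the diff.

```python
import timm
import torch
from PIL import Image

# NOTE: the exact hub identifier is truncated in the diff above; the id below is an
# assumption -- substitute the one given in the full README.
model = timm.create_model(
    "hf-hub:owkin/phikon",   # assumed identifier for the HistoSSLscaling weights
    pretrained=True,
    num_classes=0,           # feature extraction only: drop the classification head
)
model.eval()

# Build the preprocessing pipeline from the model's data config, as in the README.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

# "tile.png" is a placeholder path for a 224 x 224 histology tile.
img = Image.open("tile.png").convert("RGB")
data = transforms(img).unsqueeze(0)  # (batch_size, num_channels, img_size, img_size)

with torch.no_grad():
    output = model(data)  # (batch_size, num_features), e.g. (1, 768) for the base model

print(output.shape)
```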