How can we fine-tune the models?

#15
by roopesh - opened

Hi, thanks for Prithvi-100M.

I have a question: how can we do custom fine-tuning with Prithvi-100M? It would be great to see some examples.
Also, how should we do the data preprocessing for the Prithvi-specific models? Any guidelines would be helpful.

Thanks in advance.

IBM NASA Geospatial org

Hi! You can find examples below.

https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-burn-scar
https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-sen1floods11
https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification

We also provide more instructions, and the general mmsegmentation-based framework we used, here:

https://github.com/NASA-IMPACT/hls-foundation-os

If you do not want to use the mmsegmentation workflow, you can fine-tune by attaching a decoder head to Prithvi's encoder in another framework. For example, you can do the same in native PyTorch, as sketched below.
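For illustration, here is a minimal sketch of that idea in plain PyTorch. It is not our official recipe: `DummyEncoder`, the simple convolutional head, and the band count / patch size / image size used here are placeholders standing in for the pretrained Prithvi-100M encoder and whatever decoder you choose.

```python
# Minimal fine-tuning sketch in plain PyTorch (illustrative only, not the
# official Prithvi recipe). DummyEncoder is a stand-in: in practice you would
# instantiate the pretrained Prithvi-100M ViT encoder from its repository,
# load the released checkpoint, and pass that module in instead.
import torch
import torch.nn as nn


class SegmentationHead(nn.Module):
    """Very simple decoder: project patch tokens to class logits and upsample."""

    def __init__(self, embed_dim: int, num_classes: int, grid_size: int):
        super().__init__()
        self.grid_size = grid_size
        self.proj = nn.Conv2d(embed_dim, num_classes, kernel_size=1)
        self.upsample = nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings (class token already removed).
        b, n, d = tokens.shape
        feat = tokens.transpose(1, 2).reshape(b, d, self.grid_size, self.grid_size)
        return self.upsample(self.proj(feat))


class FineTuneModel(nn.Module):
    """Pretrained encoder plus a task-specific decoder head."""

    def __init__(self, encoder: nn.Module, embed_dim: int, num_classes: int, grid_size: int):
        super().__init__()
        self.encoder = encoder  # pretrained Prithvi encoder (frozen or trainable)
        self.head = SegmentationHead(embed_dim, num_classes, grid_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.encoder(x)  # expected shape: (B, N, D)
        return self.head(tokens)


class DummyEncoder(nn.Module):
    """Placeholder that produces ViT-like patch tokens; replace with Prithvi's encoder."""

    def __init__(self, in_chans: int = 6, patch: int = 16, embed_dim: int = 768):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(in_chans * patch * patch, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        p = self.patch
        patches = x.unfold(2, p, p).unfold(3, p, p)            # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, (h // p) * (w // p), -1)
        return self.proj(patches)                              # (B, N, D)


model = FineTuneModel(DummyEncoder(), embed_dim=768, num_classes=2, grid_size=14)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 6, 224, 224)            # (B, bands, H, W) toy batch
labels = torch.randint(0, 2, (2, 224, 224))     # per-pixel class labels

logits = model(images)                          # (B, num_classes, 224, 224)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

In practice you would replace `DummyEncoder` with the pretrained Prithvi encoder (loading its released checkpoint and matching the input layout it expects, which also includes a temporal dimension) and decide whether to freeze it or fine-tune it end to end.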

I am curious to know how much fine-tuning from Prithvi-100M improves your results compared to training from random initialization. Any papers/results to share here :) ?
Best!

IBM NASA Geospatial org

Hi, @simonMadec -- we have some results with this type of comparison published at: https://arxiv.org/abs/2310.18660 (Figs. 9 and 10). Best!

Yes! Thanks. I was wondering if there are more comparisons than the ones published.

IBM NASA Geospatial org

No -- we are working on a benchmark evaluation and will let you know once it is available.

IBM NASA Geospatial org

We have now open-sourced a toolkit to help with fine-tuning, available at: https://github.com/IBM/terratorch
