BLIP for RSICD image captioning:
The blip-image-captioning-base model has been fine-tuned on the RSICD dataset.
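For reference, the snippet below sketches how a checkpoint like this can be loaded for captioning with the standard transformers BLIP API. The model id and image URL are placeholders, not paths taken from this card; substitute the actual Hub repo id of this checkpoint.

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Placeholder repo id -- replace with this checkpoint's actual Hub path.
MODEL_ID = "your-username/blip-image-captioning-base-rsicd"

processor = BlipProcessor.from_pretrained(MODEL_ID)
model = BlipForConditionalGeneration.from_pretrained(MODEL_ID)

# Load a remote-sensing image (URL is illustrative only).
url = "https://example.com/aerial_scene.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Generate a caption for the image.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```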
Training parameters used are as follows:
- learning_rate = 5e-7
- optimizer = AdamW
- scheduler = ReduceLROnPlateau
- epochs = 5
- More details (demo, testing, evaluation, metrics) are available at the GitHub repo
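For context, here is a minimal sketch of a fine-tuning loop matching the hyperparameters listed above; `train_loader`, `val_loader`, and `evaluate` are hypothetical stand-ins for the RSICD data and evaluation plumbing, which live in the GitHub repo.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import ReduceLROnPlateau
from transformers import BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

# Hyperparameters from the list above; scheduler settings beyond the
# ReduceLROnPlateau defaults are not specified on this card.
optimizer = AdamW(model.parameters(), lr=5e-7)
scheduler = ReduceLROnPlateau(optimizer, mode="min")

for epoch in range(5):
    model.train()
    # train_loader is a hypothetical DataLoader yielding BlipProcessor output
    # (pixel_values, input_ids, attention_mask) for RSICD image-caption pairs.
    for batch in train_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        # BLIP returns the captioning loss when caption ids are passed as labels.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    # evaluate() is a hypothetical helper returning mean validation loss,
    # which drives the plateau scheduler.
    scheduler.step(evaluate(model, val_loader))
```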