Replicating evaluation?
Thanks for this amazing project. I'm looking into replicating a subset of the evaluations shown on the model card. Is the code used to run these evals and generate these graphs available somewhere?
I'm actually not interested in replicating the full eval suite, but thought it would be prudent, when fine-tuning, to check whether the fine-tuning is hurting performance elsewhere. So my thought is to adopt some of the key evaluation metrics on a smaller subset of the datasets (e.g. maybe VQA-v2 and ImageNet-1k) and run them before and after, or maybe even periodically during the fine-tuning process.
Hi!
At the moment, our training and evaluation codebase is not public.
There are plans to make it public, although the team needs to take a break first ;)
Happy to help if you are reproducing the numbers independently of the codebase!
For VQA, we used open-ended evaluation, meaning that we let the model generate free-form text and use that as the predicted answer.
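If it helps, here is a minimal sketch of what open-ended VQA evaluation can look like. It assumes a transformers-style `model`/`processor` pair (placeholder names, not the team's actual code) and a simplified version of the VQA-v2 accuracy formula (the official scorer also normalizes punctuation/articles and averages over answer subsets):

```python
# Sketch of open-ended VQA evaluation: the model generates free-form text,
# which is compared against the human reference answers with the (simplified)
# VQA accuracy formula. `model` and `processor` are hypothetical placeholders
# for whatever vision-language model/processor pair you are fine-tuning.

import torch


def vqa_accuracy(prediction: str, reference_answers: list[str]) -> float:
    """Simplified VQA-v2 accuracy: min(#annotators giving this answer / 3, 1)."""
    prediction = prediction.strip().lower()
    matches = sum(prediction == ref.strip().lower() for ref in reference_answers)
    return min(matches / 3.0, 1.0)


@torch.no_grad()
def evaluate_vqa(model, processor, samples, max_new_tokens=10):
    """`samples` is an iterable of dicts with keys 'image', 'question', 'answers'."""
    scores = []
    for sample in samples:
        prompt = f"Question: {sample['question']} Answer:"
        inputs = processor(
            images=sample["image"], text=prompt, return_tensors="pt"
        ).to(model.device)
        generated = model.generate(**inputs, max_new_tokens=max_new_tokens)
        # Keep only the newly generated tokens as the free-form answer.
        answer = processor.batch_decode(
            generated[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )[0]
        scores.append(vqa_accuracy(answer, sample["answers"]))
    return sum(scores) / len(scores)
```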
For ImageNet, we use rank evaluation, meaning that we score all 1,000 class options via log-probability under the model and choose as the prediction the option with the highest score.
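And a corresponding sketch of rank evaluation, again with placeholder `model`/`processor` names and a hypothetical prompt template, not our exact implementation. Each class name is scored by the sum of its token log-probabilities conditioned on the image, and the highest-scoring class wins:

```python
# Sketch of rank evaluation on ImageNet-1k: score each of the 1,000 class names
# by the log-probability of its tokens under the model (conditioned on the image
# and a prompt), then predict the argmax. Whether you score only the class-name
# tokens or the full sequence, and whether you length-normalize, are design
# choices worth checking against the numbers you want to reproduce.

import torch
import torch.nn.functional as F


@torch.no_grad()
def rank_classify(model, processor, image, class_names, prompt="This is a photo of"):
    scores = []
    for name in class_names:
        text = f"{prompt} {name}"
        inputs = processor(images=image, text=text, return_tensors="pt").to(model.device)
        input_ids = inputs["input_ids"]
        logits = model(**inputs).logits  # (1, seq_len, vocab_size)
        # Log-probability of each token given the preceding tokens.
        log_probs = F.log_softmax(logits[:, :-1], dim=-1)
        targets = input_ids[:, 1:]
        token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # Only score the tokens contributed by the class name (it sits at the end).
        name_len = len(
            processor.tokenizer(f" {name}", add_special_tokens=False)["input_ids"]
        )
        scores.append(token_lp[0, -name_len:].sum().item())
    return int(torch.tensor(scores).argmax())
```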