Add evaluation results on the adversarialQA config of adversarial_qa

#3
opened by autoevaluator (HF staff)

Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the adversarial_qa dataset by @mbartolo, using the predictions stored here.
Accept this pull request to see the results displayed on the Hub leaderboard.
Evaluate your model on more datasets here.

Hey @mbartolo 👋
We are going to reject this PR for now because our model was trained on the SQuAD dataset. We believe that comparing it with models trained on the adversarialQA training set would be confusing for users. We would, however, be very interested to see the performance of models that use this one as a baseline and have then continued training on an adversarialQA dataset.
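
For anyone interested in the follow-up suggested above, here is a minimal sketch (not part of this PR) of how a SQuAD-trained baseline could be further fine-tuned on the adversarialQA config. The checkpoint name below is a hypothetical placeholder, and answer-span alignment is omitted for brevity.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Hypothetical placeholder; substitute the SQuAD-trained model this thread refers to.
base_checkpoint = "your-org/your-squad-model"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(base_checkpoint)

# The adversarialQA config combines the dBiDAF, dBERT, and dRoBERTa subsets.
adversarial = load_dataset("adversarial_qa", "adversarialQA")

def preprocess(examples):
    # Tokenize question/context pairs; mapping answers to token start/end
    # positions (needed for real training) is omitted here for brevity.
    return tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,
        stride=128,
        padding="max_length",
    )

tokenized_train = adversarial["train"].map(preprocess, batched=True)
# From here, a standard Trainer loop (as in the Hugging Face question-answering
# examples) would continue training the SQuAD baseline on the adversarial data.
```
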

Tuana changed pull request status to closed
