Description
This is the MADE encoder created by Friedman et al. (2021). It should be used together with one of the following dataset-specific adapters (see the usage sketch after this list):
- https://huggingface.co/UKP-SQuARE/MADE_HotpotQA_Adapter
- https://huggingface.co/UKP-SQuARE/MADE_TriviaQA_Adapter
- https://huggingface.co/UKP-SQuARE/MADE_SQuAD_Adapter
- https://huggingface.co/UKP-SQuARE/MADE_SearchQA_Adapter
- https://huggingface.co/UKP-SQuARE/MADE_NewsQA_Adapter
- https://huggingface.co/UKP-SQuARE/MADE_NaturalQuestions_Adapter
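As a rough illustration of how the encoder and an adapter fit together, the sketch below loads this encoder with the SQuAD adapter via the `adapters` (AdapterHub) library. The repository id `UKP-SQuARE/MADE-Encoder`, the presence of a QA prediction head in the adapter, and the example question/context are assumptions, not part of the original release.

```python
# A minimal usage sketch, not an official example from the MADE authors.
# Assumptions: the `adapters` (AdapterHub) library is installed alongside
# `transformers`, this encoder is available under the repo id
# "UKP-SQuARE/MADE-Encoder" (adjust to the actual id), and the loaded
# adapter ships a QA prediction head. The question/context are made up.
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

encoder_id = "UKP-SQuARE/MADE-Encoder"  # assumed repo id of this encoder
tokenizer = AutoTokenizer.from_pretrained(encoder_id)
model = AutoAdapterModel.from_pretrained(encoder_id)

# Load one of the dataset-specific adapters listed above and activate it.
adapter_name = model.load_adapter("UKP-SQuARE/MADE_SQuAD_Adapter")
model.set_active_adapters(adapter_name)

# Extractive QA: encode a (question, context) pair and run the adapted encoder.
question = "Who created the MADE encoder?"
context = "MADE was proposed by Friedman et al. at EMNLP 2021."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)

# If the adapter includes a span-extraction head, the outputs expose start/end
# logits that can be decoded into an answer span.
print(outputs)
```

To switch datasets, load a different adapter from the list above and activate it in the same way.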
The UKP-SQuARE team created this model repository to simplify the deployment of this model on the UKP-SQuARE platform. The original authors' GitHub repository is https://github.com/princeton-nlp/MADE.
Evaluation Results
Friedman et al. (2021) reported the following results:
- SQuAD v1.1: 92.4
- HotpotQA: 81.5
- TriviaQA: 80.5
- NewsQA: 72.1
- SearchQA: 85.8
- NaturalQuestions: 80.9
- Avg: 82.2
Please refer to the original publication for more information.
Citation
Single-dataset Experts for Multi-dataset Question Answering (Friedman et al., EMNLP 2021)