# context-only-question-generator
## Model description
This model is a sequence-to-sequence question generator that takes a context passage as input and generates a question as output. It is based on a pretrained bart-base model.
## How to use
Inputs should be organised into the following format:

`context`

The input sequence can then be encoded and passed as the `input_ids` argument to the model's `generate()` method, as in the sketch below.
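
A minimal sketch of this workflow using the `transformers` library. The checkpoint path, the example context, and the generation settings below are placeholders and assumptions, not part of the model card; replace the checkpoint with this model's full repository name on the Hub.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed checkpoint path -- substitute the model's actual Hub repository name.
checkpoint = "context-only-question-generator"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The input is simply the context passage (illustrative example).
context = (
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars "
    "in Paris, France. It was constructed from 1887 to 1889."
)

# Encode the context and pass the token ids to generate().
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=64)

# Decode the generated question.
question = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(question)
```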