
This is a fine-tuning of distilbert-base-uncased trained on a small set of manually labeled sentences classified as "instruction" or "problem". The main purpose is to calculate metrics used by the SCBN-RQTL chatbot response evaluation benchmark. The acronym TL stands for "Test vs. Learn" and is based on the assumption that prompts labeled as "instruction" typically reflect the user's intent to "learn," while prompts labeled as "problem" are generally aimed at "testing" the chatbot by challenging it. More information is available in the GitHub repository.
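A minimal usage sketch with the Hugging Face Transformers text-classification pipeline is shown below. The model ID follows this card (reddgr/tl-test-learn-prompt-classifier); the exact label strings returned by the model are an assumption based on the description above and may differ (e.g. LABEL_0/LABEL_1), so check the model config if the output looks different.

```python
# Sketch: classify prompts as "instruction" vs. "problem" with this model.
# Assumes the model's config maps labels to the names described above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="reddgr/tl-test-learn-prompt-classifier",
)

prompts = [
    "Explain how gradient descent works.",
    "If a train leaves at 3 pm traveling 60 mph, when does it cover 180 miles?",
]

# The pipeline returns one dict per input with 'label' and 'score' keys.
for prompt, result in zip(prompts, classifier(prompts)):
    print(f"{result['label']} ({result['score']:.3f}): {prompt}")
```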
