
llm-data-textbook-quality-fasttext-classifier-v1

This model is built on fastText. It is an optimised version of llm-data-textbook-quality-classifier-v1: not only does it achieve a higher F1 score, it also classifies more than 2,000 examples per second on CPU.
The model classifies whether a text is of textbook quality, and can be used as a filter for data curation when training an LLM.
Please note that textbook quality is a subset of high quality.

Model Performance

| Dataset | F1 Score |
|---------|----------|
| Train   | 0.8695   |
| Test    | 0.8485   |

Usage

from typing import List
import re
from huggingface_hub import hf_hub_download
import fasttext


model = fasttext.load_model(hf_hub_download("kenhktsui/llm-data-textbook-quality-fasttext-classifer-v1", "model.bin"))


def replace_newlines(text: str) -> str:
  # fastText's predict() does not accept newline characters, so collapse them to spaces
  return re.sub(r"\n+", " ", text)


def predict(text_list: List[str]) -> List[dict]:
  text_list = [replace_newlines(text) for text in text_list]
  pred = model.predict(text_list)
  # strip fastText's "__label__" prefix (lstrip would remove any matching leading
  # characters, not the prefix as a whole)
  return [{"label": l[0].replace("__label__", "", 1), "score": s[0]}
           for l, s in zip(*pred)]


predict(["Hi"])
# Output: [{'label': 'LOW_QUALITY', 'score': 1.00001}]
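To use the classifier as a curation filter, one can keep only documents predicted HIGH_QUALITY above a probability threshold. A minimal sketch of this step, assuming the `predict()` output format above; the 0.5 default threshold and the sample predictions are illustrative, not prescribed by the model card:

```python
from typing import List


def filter_high_quality(texts: List[str], preds: List[dict],
                        threshold: float = 0.5) -> List[str]:
  """Keep texts whose HIGH_QUALITY probability meets the threshold.

  `preds` is the output of predict() above: one dict per text with
  'label' and 'score' keys.
  """
  kept = []
  for text, pred in zip(texts, preds):
    if pred["label"] == "HIGH_QUALITY" and pred["score"] >= threshold:
      kept.append(text)
  return kept


# Illustrative predictions (not real model output):
texts = ["A rigorous introduction to linear algebra.", "click here to win!!!"]
preds = [{"label": "HIGH_QUALITY", "score": 0.93},
         {"label": "LOW_QUALITY", "score": 0.99}]
print(filter_high_quality(texts, preds))
# ['A rigorous introduction to linear algebra.']
```

In a real pipeline, `preds` would come from `predict(texts)` and the threshold would be tuned against a held-out sample of the corpus.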

Benchmark

Average Quality Score is defined as the average probability of the HIGH_QUALITY label output by the classifier. The results align with expectations: the textbook category scores highest, reflecting the effectiveness of this model; Wikipedia scores lower, since it is, after all, not written as a textbook; and web text scores lowest.
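The Average Quality Score above can be sketched as follows, assuming the `predict()` output format from the Usage section. Converting a LOW_QUALITY prediction to a HIGH_QUALITY probability via `1 - score` is an assumption based on the two-class probabilities summing to ~1:

```python
from typing import List


def average_quality_score(preds: List[dict]) -> float:
  """Mean HIGH_QUALITY probability over a set of predictions.

  For a LOW_QUALITY prediction, the HIGH_QUALITY probability is taken
  as 1 - score; values are clamped to [0, 1] because fastText scores
  can slightly overshoot 1 (e.g. 1.00001).
  """
  probs = []
  for pred in preds:
    p = pred["score"] if pred["label"] == "HIGH_QUALITY" else 1 - pred["score"]
    probs.append(min(max(p, 0.0), 1.0))
  return sum(probs) / len(probs)


# Illustrative predictions (not real benchmark data):
preds = [{"label": "HIGH_QUALITY", "score": 0.9},
         {"label": "LOW_QUALITY", "score": 0.7}]
print(average_quality_score(preds))
```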
