
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
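The second step above can be sketched in a few lines of scikit-learn. This is a toy illustration only: in the real pipeline the features come from the fine-tuned BAAI/bge-base-en-v1.5 body, whereas here they are replaced by synthetic 768-dimensional vectors so the snippet is self-contained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 2 sketch: fit a LogisticRegression head on sentence embeddings.
# Synthetic 768-d vectors stand in for the fine-tuned encoder's output.
rng = np.random.default_rng(42)
n_per_class, dim = 32, 768

# Two offset Gaussian clusters standing in for label 0 ("Bad") and 1 ("Good").
X = np.vstack([
    rng.normal(loc=-0.5, scale=1.0, size=(n_per_class, dim)),
    rng.normal(loc=+0.5, scale=1.0, size=(n_per_class, dim)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

head = LogisticRegression(max_iter=1000)
head.fit(X, y)
preds = head.predict(X)
```

At inference time, SetFit embeds the input text with the fine-tuned body and passes the embedding to this head, which is what `model("...")` does under the hood.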

Model Details

Model Description

Model Sources

Model Labels

Label Examples
0
  • "Reasoning:\nThe answer provided does not align with the content of the documents. It offers general advice on saving money rather than specific insights from the provided documents that relate to ORGANIZATION's specific guidelines or context about financial prudence or savings.\n\nEvaluation: Bad"
  • 'Reasoning:\nThe answer is correct; it properly identifies several specific pet peeves mentioned in the document, such as sabotaging work, unwanted advances, and derogatory comments. However, the answer contains numerous repetitions of names and accidental insertions of text fragments which make it difficult to read. This detracts from the clarity and quality, despite being factually correct.\n\nEvaluation: Bad'
  • "Reasoning:\nThe answer given does not provide any information or instructions about accessing the company's training resources. Instead, it lists various unrelated methods such as accessing personal documents, managing passwords, and requesting learning budgets, based on the provided documents. The answer does not directly address the question.\n\nEvaluation: Bad"
1
  • 'Reasoning:\nThe answer accurately captures the key points from the document regarding how feedback should be given. It mentions giving feedback at the time of the event, focusing on the situation rather than the person, aiming to help rather than shame, being clear and direct, and showing appreciation. It also covers tips for receiving feedback. The answer presents these points clearly and is aligned with the provided document.\n\nEvaluation: Good'
  • 'Reasoning:\nThe answer effectively captures the reasons for proactively sharing information from high-level meetings, such as providing transparency, ensuring that team members have the necessary context, aligning the team, and fostering a sense of purpose. These points are supported by the provided documents, particularly Document 4.\n\nEvaluation: Good'
  • 'Reasoning:\nThe provided answer accurately describes the procedure for reporting car travel expenses for reimbursement, including precise details such as tracking kilometers and sending details to specific email addresses. This information directly corresponds to the content provided in Document 1.\n\nFinal result: Good'

Evaluation

Metrics

Label Accuracy
all 0.5522
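The metric above is plain label accuracy (the fraction of evaluation examples whose predicted label matches the gold label). It can be reproduced with scikit-learn; the labels below are made up for illustration, not the model's actual evaluation data.

```python
from sklearn.metrics import accuracy_score

# Toy labels only, not the real evaluation set.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
acc = accuracy_score(y_true, y_pred)  # fraction of matching labels
print(acc)  # → 0.625
```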

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_newrelic_gpt-4o_improved-cot_chat_few_shot_only_reasoning_1726751494.1082")
# Run inference
preds = model("""Reasoning:
The answer given in the response correctly reflects the information in Document 1, which states that questions regarding travel reimbursement should be directed to finance@ORGANIZATION_2.<89312988>. The required email address is present in the document and clearly mentions who to contact.
Evaluation: Good""")

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    21   47.4462  85

Label  Training Sample Count
0      32
1      33
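Word-count statistics like those in the first table can be computed with a short snippet. The corpus below is hypothetical; the real training set is not reproduced here.

```python
import statistics

# Hypothetical examples in the card's "Reasoning: ... Evaluation: ..." format.
texts = [
    "Reasoning: the answer matches Document 1. Evaluation: Good",
    "Reasoning: the answer ignores the document entirely. Evaluation: Bad",
    "Reasoning: partially correct but unclear. Evaluation: Bad",
]
counts = [len(t.split()) for t in texts]
print(min(counts), statistics.median(counts), max(counts))
```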

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (5, 5)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
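The hyperparameters above map directly onto SetFit's `TrainingArguments`. A minimal configuration sketch, assuming the setfit >= 1.0 API (not executed here; `loss` and `distance_metric` are left at their defaults, which match the CosineSimilarityLoss and cosine_distance values listed above):

```python
from setfit import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# Tuples give separate values for the (embedding, classifier) training phases.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(5, 5),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```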

Training Results

Epoch Step Training Loss Validation Loss
0.0061 1 0.2243 -
0.3067 50 0.2608 -
0.6135 100 0.2456 -
0.9202 150 0.1701 -
1.2270 200 0.0069 -
1.5337 250 0.0026 -
1.8405 300 0.0021 -
2.1472 350 0.0020 -
2.4540 400 0.0018 -
2.7607 450 0.0016 -
3.0675 500 0.0015 -
3.3742 550 0.0015 -
3.6810 600 0.0014 -
3.9877 650 0.0014 -
4.2945 700 0.0014 -
4.6012 750 0.0013 -
4.9080 800 0.0013 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.1+cu121
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}