
Model Card for IDMGSP-Galactica-TRAIN-CG

A fine-tuned Galactica model to detect machine-generated scientific papers based on their abstract, introduction, and conclusion.

This model was trained on the TRAIN-CG subset of the dataset available at https://huggingface.co/datasets/tum-nlp/IDMGSP.

This model card is a work in progress; please check the repository, the dataset card, and the paper for more details.

Model Details

Model Description

  • Developed by: Technical University of Munich (TUM)
  • Model type: Sequence classification (Galactica, OPT architecture)
  • Language(s) (NLP): English
  • License: [More Information Needed]
  • Fine-tuned from model: Galactica

Model Sources

Uses

Direct Use

from transformers import AutoTokenizer, OPTForSequenceClassification, pipeline

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub.
model = OPTForSequenceClassification.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
tokenizer = AutoTokenizer.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")

# Build a text-classification pipeline and pass the paper's abstract,
# introduction, and conclusion as a single input string.
reader = pipeline("text-classification", model=model, tokenizer=tokenizer)
reader(
    '''
Abstract:
....

Introduction:
....

Conclusion:
...'''
)
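
The pipeline returns a list with one dictionary per input, each containing a predicted label and a confidence score; whether the labels read as "real"/"fake" or as generic LABEL_0/LABEL_1 identifiers depends on the model's configuration.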

Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Training Details

Training Data

The training dataset comprises machine-generated scientific papers produced by the Galactica, GPT-2, and SCIgen models, as well as real papers extracted from the arXiv database.

The table below lists the number of samples from each source used to construct the training set. The dataset can be found at https://huggingface.co/datasets/tum-nlp/IDMGSP.

| Dataset | arXiv (real) | ChatGPT (fake) | GPT-2 (fake) | SCIgen (fake) | Galactica (fake) | GPT-3 (fake) |
|---|---|---|---|---|---|---|
| TRAIN without ChatGPT (TRAIN-CG) | 8k | - | 2k | 2k | 2k | - |
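
The following minimal sketch shows one way to load this data with the datasets library; the configuration name "train-cg" is an assumption based on the subset name above, so consult the dataset card for the exact configuration and split names.

from datasets import load_dataset

# Minimal sketch: load the IDMGSP training data from the Hugging Face Hub.
# The configuration name "train-cg" is assumed from the subset name above;
# see https://huggingface.co/datasets/tum-nlp/IDMGSP for the exact names.
dataset = load_dataset("tum-nlp/IDMGSP", "train-cg")
print(dataset)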

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

[More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
