---
language: ko
license: cc-by-sa-4.0
tags:
  - korean
  - klue
  - squad-kor-v1
mask_token: '[MASK]'
widget:
  - text: 대한민국의 수도는 [MASK] 입니다.
---

# KLUE BERT base Finetuned on squad-kor-v1

## Table of Contents

- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)

## Model Details

**Model Description:** This model is KLUE BERT base, fine-tuned on the squad-kor-v1 dataset for Korean extractive question answering.

- **Developed by:** Yeongjin Gwak
- **Model Type:** Transformer-based language model
- **Language(s):** Korean
- **License:** cc-by-sa-4.0
- **Parent Model:** See the [KLUE BERT base](https://huggingface.co/klue/bert-base) model card for more information about the parent model.

## How to Get Started With the Model

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load with the question-answering head; plain AutoModel would discard it.
model = AutoModelForQuestionAnswering.from_pretrained("yjgwak/klue-bert-base-finetuned-squard-kor-v1")
tokenizer = AutoTokenizer.from_pretrained("yjgwak/klue-bert-base-finetuned-squard-kor-v1")
```
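For quick inference, the Transformers `question-answering` pipeline wraps tokenization, the forward pass, and span decoding. The Korean question and context below are illustrative examples, not taken from the model card:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="yjgwak/klue-bert-base-finetuned-squard-kor-v1",
)

# Illustrative example — question: "What is the capital of South Korea?"
# context: "The capital of South Korea is Seoul."
result = qa(
    question="대한민국의 수도는 어디입니까?",
    context="대한민국의 수도는 서울이다.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.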

## Uses

### Direct Use

This model is specialized for extractive question answering in Korean: given a question and a passage, it predicts the answer span within the passage.
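Under the hood, the model scores every token position as a possible answer start and end; decoding the highest-scoring span yields the answer. A minimal sketch of that extraction (the question and context are illustrative, and this greedy argmax decoding is a simplification of what the pipeline does):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "yjgwak/klue-bert-base-finetuned-squard-kor-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# Illustrative example — question: "What is the capital of South Korea?"
# context: "The capital of South Korea is Seoul. Seoul lies on the Han River."
question = "대한민국의 수도는 어디입니까?"
context = "대한민국의 수도는 서울이다. 서울은 한강을 끼고 있다."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy decoding: take the most likely start and end token positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)
```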

### Misuse and Out-of-scope Use

The model should not be used for tasks other than extractive question answering without further fine-tuning; using it to generate long-form content, or for other tasks it was not trained on, is likely to produce poor results.

## Training

### Training Data

The model was fine-tuned on squad-kor-v1 (KorQuAD 1.0), a Korean reading-comprehension dataset built in the style of the popular SQuAD dataset for question answering.

### Training Procedure

The standard BERT fine-tuning recipe for extractive question answering was followed, with squad-kor-v1 as the fine-tuning dataset: the model predicts the start and end positions of the answer span and is trained to minimize the cross-entropy loss between the predicted positions and the ground-truth span.
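Concretely, the loss for one example is the average of two token-level cross-entropies, one over start positions and one over end positions. A self-contained sketch of that computation (the logits below are toy values, not the model's actual outputs):

```python
import math

def span_cross_entropy(start_logits, end_logits, start_idx, end_idx):
    """Average of start- and end-position cross-entropy, as in BERT-style extractive QA."""
    def ce(logits, target):
        # Numerically stable log-softmax cross-entropy for one target index.
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        return log_z - logits[target]
    return 0.5 * (ce(start_logits, start_idx) + ce(end_logits, end_idx))

# Toy example: 5-token passage, gold answer span covers tokens 2..3.
start_logits = [0.1, 0.2, 3.0, 0.0, -1.0]
end_logits = [0.0, 0.1, 0.5, 2.5, -0.5]
loss = span_cross_entropy(start_logits, end_logits, 2, 3)
```

The loss is small here because the toy logits already put most probability mass on the gold start and end positions.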

## Evaluation

Quantitative evaluation (e.g. exact match and F1 on the squad-kor-v1 validation set) is pending and will be added to this card.

## Technical Specifications

See the original KLUE BERT base model card for details on the underlying architecture and technical specifications.

## Citation Information

Please cite the original KLUE paper and any other relevant resources or papers associated with the squad-kor-v1 dataset.

