KoAlpaca-RealQA-Solar-Ko-Recovery-11B (QLoRA with Unsloth)

Model Description

  • Developed by: Lee Junbum (Beomi)
  • Model type: Instruction-tuned model, trained on the beomi/KoAlpaca-RealQA dataset
  • Language(s) (NLP): Mainly Korean, with partial English
  • License: Apache 2.0
  • Finetuned from model: beomi/Solar-Ko-Recovery-11B
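
Per the title, the model was fine-tuned with QLoRA via Unsloth on the beomi/KoAlpaca-RealQA dataset. Below is a minimal sketch of what such a run typically looks like; it is illustrative only. The LoRA rank and target modules, the dataset column names, and the trainer hyperparameters are assumptions, not the values actually used for this model, and SFTTrainer argument names vary across trl versions.

# Hypothetical QLoRA fine-tuning sketch with Unsloth.
# Hyperparameters and dataset column names below are assumptions,
# not the settings actually used to train this model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "beomi/Solar-Ko-Recovery-11B", # base model
    max_seq_length = 2048,
    load_in_4bit = True, # QLoRA: 4-bit quantized base weights
)

# Attach LoRA adapters; rank, alpha, and target modules are assumed values.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:
{}"""

# Render each example into the prompt template; the "question"/"answer"
# column names are an assumption about the dataset schema.
def to_text(example):
    text = alpaca_prompt.format(example["question"], example["answer"])
    return {"text": text + tokenizer.eos_token}

dataset = load_dataset("beomi/KoAlpaca-RealQA", split = "train").map(to_text)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        bf16 = True,
        output_dir = "outputs",
    ),
)
trainer.train()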

Model Sources

Direct Use with Unsloth

# pip install -U hf_transfer unsloth
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" # download speed upto 1000MB/s

import torch
from unsloth import FastLanguageModel
from transformers import TextStreamer


model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B", # YOUR MODEL YOU USED FOR TRAINING
    max_seq_length = 2048,
    dtype = torch.bfloat16,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:
{}"""

def gen(x):
    # Build the Alpaca-style prompt, leaving the response slot empty.
    inputs = tokenizer(
        [
            alpaca_prompt.format(
                x.strip(), # instruction
                "",        # response - leave this blank for generation!
            )
        ],
        return_tensors = "pt",
    ).to("cuda")

    # Stream tokens to stdout as they are generated.
    text_streamer = TextStreamer(tokenizer)
    _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
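
The gen helper streams tokens to stdout. If you instead want the generated answer back as a Python string (e.g., for logging or post-processing), a minimal variant is sketched below; gen_text is a convenience helper, not part of the original card.

def gen_text(x):
    # Same prompt construction as gen(), but returns the decoded
    # response as a string instead of streaming it.
    inputs = tokenizer(
        [alpaca_prompt.format(x.strip(), "")],
        return_tensors = "pt",
    ).to("cuda")
    output_ids = model.generate(**inputs, max_new_tokens = 512)
    # Keep only the tokens generated after the prompt; drop special tokens.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens = True)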

Generation Example

Sample 01

gen("μ•ˆλ…•ν•˜μ„Έμš”")
<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
μ•ˆλ…•ν•˜μ„Έμš”

### Response:
μ•ˆλ…•ν•˜μ„Έμš”! μ–΄λ–»κ²Œ λ„μ™€λ“œλ¦΄κΉŒμš”?</s>

(English: "Hello! How can I help you?")

Sample 02

gen("""μ•„λž˜ 글을 ν•œκ΅­μ–΄λ‘œ λ²ˆμ—­ν•΄μ€˜.
Dataset Summary

The KoAlpaca-RealQA dataset is a unique Korean instruction dataset designed to closely reflect real user interactions in the Korean language. Unlike conventional Korean instruction datasets that rely heavily on translated prompts, this dataset is composed of authentic Korean instructions derived from real-world use cases. Specifically, the dataset has been curated from user interactions with the ChatKoAlpaca service, which is based on the KoAlpaca model trained between 2023 and 2024.

This dataset provides a more accurate portrayal of typical Korean user behaviors, questions, and language structures, making it highly relevant for developing language models aimed at understanding and responding to Korean speakers. By leveraging GPT4o to generate high-quality answers, KoAlpaca-RealQA aims to offer a robust resource for training models that need to engage with Korean users in a natural and meaningful way.
""")
<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
μ•„λž˜ 글을 ν•œκ΅­μ–΄λ‘œ λ²ˆμ—­ν•΄μ€˜.
Dataset Summary

The KoAlpaca-RealQA dataset is a unique Korean instruction dataset designed to closely reflect real user interactions in the Korean language. Unlike conventional Korean instruction datasets that rely heavily on translated prompts, this dataset is composed of authentic Korean instructions derived from real-world use cases. Specifically, the dataset has been curated from user interactions with the ChatKoAlpaca service, which is based on the KoAlpaca model trained between 2023 and 2024.

This dataset provides a more accurate portrayal of typical Korean user behaviors, questions, and language structures, making it highly relevant for developing language models aimed at understanding and responding to Korean speakers. By leveraging GPT4o to generate high-quality answers, KoAlpaca-RealQA aims to offer a robust resource for training models that need to engage with Korean users in a natural and meaningful way.

### Response:
KoAlpaca-RealQA 데이터셋은 ν•œκ΅­μ–΄ μ‚¬μš©μžλ“€μ˜ μ‹€μ œ μƒν˜Έμž‘μš©μ„ 맀우 잘 λ°˜μ˜ν•˜λ„λ‘ μ„€κ³„λœ λ…νŠΉν•œ ν•œκ΅­μ–΄ μ§€μ‹œ λ°μ΄ν„°μ…‹μž…λ‹ˆλ‹€. λ²ˆμ—­λœ ν”„λ‘¬ν”„νŠΈμ— 크게 μ˜μ‘΄ν•˜λŠ” 기쑴의 ν•œκ΅­μ–΄ μ§€μ‹œ 데이터셋과 달리, 이 데이터셋은 μ‹€μ œ μ‚¬μš© μ‚¬λ‘€μ—μ„œ 유래된 μ§„μ •ν•œ ν•œκ΅­μ–΄ μ§€μ‹œλ‘œ κ΅¬μ„±λ˜μ–΄ μžˆμŠ΅λ‹ˆλ‹€. 특히, 이 데이터셋은 2023λ…„κ³Ό 2024λ…„ 사이에 ν›ˆλ ¨λœ KoAlpaca λͺ¨λΈμ„ 기반으둜 ν•œ ChatKoAlpaca μ„œλΉ„μŠ€μ™€μ˜ μ‚¬μš©μž μƒν˜Έμž‘μš©μ—μ„œ μˆ˜μ§‘λ˜μ—ˆμŠ΅λ‹ˆλ‹€.

이 데이터셋은 일반적인 ν•œκ΅­μ–΄ μ‚¬μš©μžμ˜ 행동, 질문 및 μ–Έμ–΄ ꡬ쑰λ₯Ό 더 μ •ν™•ν•˜κ²Œ λ¬˜μ‚¬ν•˜μ—¬, ν•œκ΅­μ–΄ μ‚¬μš©μžλ₯Ό μ΄ν•΄ν•˜κ³  μ‘λ‹΅ν•˜λŠ” μ–Έμ–΄ λͺ¨λΈμ„ κ°œλ°œν•˜λŠ” 데 맀우 μœ μš©ν•©λ‹ˆλ‹€. GPT4oλ₯Ό ν™œμš©ν•˜μ—¬ κ³ ν’ˆμ§ˆμ˜ 닡변을 μƒμ„±ν•¨μœΌλ‘œμ¨, KoAlpaca-RealQAλŠ” μžμ—°μŠ€λŸ½κ³  의미 μžˆλŠ” λ°©μ‹μœΌλ‘œ ν•œκ΅­μ–΄ μ‚¬μš©μžμ™€ μƒν˜Έμž‘μš©ν•΄μ•Ό ν•˜λŠ” λͺ¨λΈμ„ ν›ˆλ ¨μ‹œν‚€κΈ° μœ„ν•œ κ°•λ ₯ν•œ μžμ›μ„ λͺ©ν‘œλ‘œ ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€.</s>

(The instruction "μ•„λž˜ 글을 ν•œκ΅­μ–΄λ‘œ λ²ˆμ—­ν•΄μ€˜." means "Translate the text below into Korean"; the response is a Korean translation of the English dataset summary above.)

Sample 03

gen("""KoAlpaca에 λŒ€ν•΄ μ„€λͺ…ν•΄μ€˜.""")
<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
KoAlpaca에 λŒ€ν•΄ μ„€λͺ…ν•΄μ€˜.

### Response:
KoAlpacaλŠ” ν•œκ΅­μ–΄ μžμ—°μ–΄ 처리(NLP) λͺ¨λΈλ‘œ, OpenAI의 GPT-3와 같은 λŒ€ν˜• μ–Έμ–΄ λͺ¨λΈ(LLM)을 기반으둜 ν•©λ‹ˆλ‹€. KoAlpacaλŠ” ν•œκ΅­μ–΄ 데이터λ₯Ό 주둜 μ‚¬μš©ν•˜μ—¬ ν•™μŠ΅λ˜μ—ˆκΈ° λ•Œλ¬Έμ— ν•œκ΅­μ–΄ ν…μŠ€νŠΈλ₯Ό μ΄ν•΄ν•˜κ³  μƒμ„±ν•˜λŠ” 데 νŠΉν™”λ˜μ–΄ μžˆμŠ΅λ‹ˆλ‹€. 이 λͺ¨λΈμ€ λ‹€μ–‘ν•œ ν•œκ΅­μ–΄ μ‘μš© ν”„λ‘œκ·Έλž¨μ—μ„œ ν™œμš©λ  수 있으며, 예λ₯Ό λ“€μ–΄ λŒ€ν™”ν˜• AI, λ²ˆμ—­, μš”μ•½, 질문 λ‹΅λ³€ λ“± μ—¬λŸ¬ λΆ„μ•Όμ—μ„œ μ‚¬μš©λ  수 μžˆμŠ΅λ‹ˆλ‹€.

KoAlpacaλŠ” ν•œκ΅­μ–΄ μ‚¬μš©μžμ—κ²Œ 보닀 μžμ—°μŠ€λŸ½κ³  μœ μ°½ν•œ μ–Έμ–΄ μƒν˜Έμž‘μš©μ„ μ œκ³΅ν•˜λ©°, ν•œκ΅­μ–΄ λ¬Έλ§₯을 잘 μ΄ν•΄ν•˜κ³  μ²˜λ¦¬ν•  수 μžˆλ„λ‘ μ„€κ³„λ˜μ—ˆμŠ΅λ‹ˆλ‹€. μ΄λŸ¬ν•œ λͺ¨λΈμ€ ν•œκ΅­μ–΄ NLP 연ꡬ와 μ‚°μ—…μ—μ„œ μ€‘μš”ν•œ λ„κ΅¬λ‘œ μ‚¬μš©λ  수 μžˆμŠ΅λ‹ˆλ‹€.</s>