
Overview

  • ํ•œ๊ตญ์–ด ์š”์•ฝ Task๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

Base Model

Dataset

We sampled 30,000 examples from the Korean summarization datasets on AI Hub and used them for training.
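
As a rough illustration only (not the actual preprocessing script), sampling 30,000 records could look like the sketch below; the file name and field names are hypothetical placeholders, since the exact AI Hub export format is not documented here.

import json
import random

# Hypothetical AI Hub export: a JSON list of {"passage": ..., "summary": ...}
# records. File name and field names are assumptions, not the real schema.
with open("aihub_summarization.json", encoding="utf-8") as f:
    records = json.load(f)

# Draw a reproducible sample of 30,000 training examples.
random.seed(42)
sampled = random.sample(records, k=30_000)

with open("train_30k.json", "w", encoding="utf-8") as f:
    json.dump(sampled, f, ensure_ascii=False)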

Library Installation

pip3 install transformers gradio vllm

Example Code

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import gradio as gr

model_path = "gangyeolkim/open-llama-2-ko-7b-summarization"

# Low temperature for near-deterministic summaries; allow up to 1024 new tokens
sampling_params = SamplingParams(max_tokens=1024, temperature=0.1)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Load the model with vLLM on a single GPU
llm = LLM(model=model_path, tokenizer=model_path, tensor_parallel_size=1)

def gen(text, history):
    # Build the prompt in the format used for fine-tuning:
    # "### 원문:" (source text) followed by "### 요약:" (summary)
    prompt = "\n".join([
        "### 원문:",
        f"{text}\n",
        "### 요약:\n",
    ])

    outputs = llm.generate(prompt, sampling_params)
    # A single prompt yields a single RequestOutput; return its first completion
    return outputs[0].outputs[0].text

demo = gr.ChatInterface(gen)
demo.launch(share=True)
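
The temperature of 0.1 keeps decoding close to greedy, which generally suits summarization, and share=True exposes the Gradio demo through a temporary public link. To sanity-check the model without launching the UI, the gen function can also be called directly; the input text below is just a placeholder.

# Direct call without the web UI; the second argument (chat history) is unused.
article = "여기에 요약할 한국어 원문을 붙여 넣으세요."  # placeholder input
print(gen(article, []))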