Overview
- A model that performs Korean summarization tasks.
Base Model
Dataset
30,000 examples were sampled from the summarization datasets available on AI Hub:
- Abstractive summarization factuality verification data
- Summary and report generation data
- Document summarization text
- Book material summarization
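The 30,000-example sample described above could be drawn with something like the following sketch. This is an assumption, not the card's actual preprocessing; the field names (`document`, `summary`) and the fixed seed are hypothetical.

```python
import random

# Hypothetical sketch: draw a fixed-size random sample from the combined
# AI Hub summarization corpora. Field names are assumptions.
def sample_examples(examples, k=30_000, seed=42):
    """Randomly sample up to k (document, summary) pairs, reproducibly."""
    rng = random.Random(seed)  # fresh RNG so the sample is deterministic
    return rng.sample(examples, min(k, len(examples)))

# Tiny stand-in corpus to show the call shape.
corpus = [{"document": f"doc {i}", "summary": f"sum {i}"} for i in range(100)]
subset = sample_examples(corpus, k=10)
```

With a fixed seed, repeated calls return the same subset, which keeps the training split reproducible.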
Installing libraries
pip3 install transformers gradio vllm
Example code
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import gradio as gr
import os
model_path = "gangyeolkim/open-llama-2-ko-7b-summarization"
sampling_params = SamplingParams(max_tokens=1024, temperature=0.1)
tokenizer = AutoTokenizer.from_pretrained(model_path)
llm = LLM(model=model_path, tokenizer=model_path, tensor_parallel_size=1)
def gen(text, history):
    # Build the prompt in the source/summary format the model expects.
    prompt = "\n".join([
        "### 원문:",
        f"{text}\n",
        "### 요약:\n",
    ])
    outputs = llm.generate(prompt, sampling_params)
    # generate() returns one RequestOutput per prompt; take its first completion.
    return outputs[0].outputs[0].text
demo = gr.ChatInterface(gen)
demo.launch(share=True)
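The prompt layout can also be built separately from the serving code, which makes it easy to test or reuse outside Gradio. A minimal sketch, assuming the "### 원문:" (source text) / "### 요약:" (summary) headers used in the example above:

```python
def build_prompt(text: str) -> str:
    """Build the source/summary prompt this model card's example uses."""
    return "\n".join([
        "### 원문:",   # source-text header
        f"{text}\n",
        "### 요약:\n",  # summary header the model is expected to continue from
    ])

# The returned string can be passed directly to llm.generate(...).
prompt = build_prompt("오늘 회의에서는 분기 실적을 논의했다.")
```

Keeping the prompt builder pure also lets you swap vLLM for plain `transformers` generation without touching the prompt format.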