
Gugugo-koen-7B-V1.1

Detail repo: https://github.com/jwj7140/Gugugo

Base Model: Llama-2-ko-7b

Training Dataset: sharegpt_deepl_ko_translation.

I trained it on a single A6000 GPU for 90 hours.

Prompt Template

KO->EN

### 한국어: {sentence}</끝>
### 영어:

EN->KO

### 영어: {sentence}</끝>
### 한국어:
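
Both templates terminate the source sentence with the special token </끝>, which also serves as the stop string at generation time. Below is a minimal sketch of a prompt builder that covers both directions; the build_prompt helper and its direction argument are illustrative and not part of the original code.

def build_prompt(sentence, direction="en2ko"):
    # Illustrative helper (assumption): wrap a sentence in the Gugugo
    # prompt template for the requested translation direction.
    if direction == "en2ko":
        return f"### 영어: {sentence}</끝>\n### 한국어:"
    if direction == "ko2en":
        return f"### 한국어: {sentence}</끝>\n### 영어:"
    raise ValueError("direction must be 'en2ko' or 'ko2en'")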

Implementation Code

from vllm import LLM, SamplingParams

def make_prompt(data):
    # Wrap each English sentence in the EN->KO prompt template.
    prompts = []
    for line in data:
        prompts.append(f"### 영어: {line}</끝>\n### 한국어:")
    return prompts

texts = [
  "Hello world!",
  "Nice to meet you!"
]

prompts = make_prompt(texts)

sampling_params = SamplingParams(temperature=0.01, stop=["</끝>"], max_tokens=700)

# Load the AWQ-quantized model in half precision with vLLM.
llm = LLM(model="squarelike/Gugugo-koen-7B-V1.1-AWQ", quantization="awq", dtype="half")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    print(output.outputs[0].text)
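
The same LLM instance can translate in the other direction by swapping the template. A minimal sketch reusing the llm and sampling_params defined above; the Korean example sentence is only illustrative.

# Korean -> English: reversed template, same stop token.
ko_texts = ["만나서 반갑습니다!"]
ko_prompts = [f"### 한국어: {t}</끝>\n### 영어:" for t in ko_texts]

for output in llm.generate(ko_prompts, sampling_params):
    print(output.outputs[0].text)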
