---
tags:
- generated_from_trainer
- code
- coding
model-index:
- name: FalCoder
  results: []
license: apache-2.0
language:
- code
thumbnail: https://huggingface.co/mrm8488/falcoder-7b/resolve/main/falcoder.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---

<div style="text-align:center;width:250px;height:250px;">
    <img src="https://huggingface.co/mrm8488/falcoder-7b/resolve/main/falcoder.png" alt="falcoder logo"">
</div>


# FalCoder 🦅👩‍💻
**Falcon-7B** fine-tuned on the **CodeAlpaca 20k instructions dataset** using the **QLoRA** method with the [PEFT](https://github.com/huggingface/peft) library.
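
The exact training script and hyperparameters have not been published (see "Training hyperparameters" below), so the snippet here is only a minimal sketch of what a QLoRA setup with PEFT and `bitsandbytes` typically looks like; the quantization settings, LoRA rank/alpha, and `target_modules` are illustrative assumptions, not the values actually used for FalCoder.

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "tiiuae/falcon-7b"

# QLoRA: load the frozen base model in 4-bit NF4 (settings below are assumptions)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach trainable LoRA adapters; rank/alpha/dropout are illustrative values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```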

## Model description 🧠

The base model is [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b), a 7B-parameter causal decoder-only model from TII (Technology Innovation Institute) trained on the RefinedWeb dataset.


## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
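
A quick way to inspect the data is to load it with the `datasets` library; printing the dataset object and one record shows the available splits and fields (the `"train"` split name is an assumption, check the printout if it differs):

```py
from datasets import load_dataset

# Download the CodeAlpaca 20K instructions used for fine-tuning
ds = load_dataset("HuggingFaceH4/CodeAlpaca_20K")
print(ds)              # splits and column names
print(ds["train"][0])  # one instruction/solution record
```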


### Training hyperparameters ⚙

TBA

### Training results 🗒️

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 100  | 0.798500      | 0.767996        |
| 200  | 0.725900      | 0.749880        |
| 300  | 0.669100      | 0.748029        |
| 400  | 0.687300      | 0.742342        |
| 500  | 0.579900      | 0.736735        |



### Example of usage 👩‍💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "mrm8488/falcoder-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

def generate(
        instruction,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=4,
        **kwargs
):
    prompt = instruction + "\n### Solution:\n"  # prompt format the model was fine-tuned on
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Solution:")[1].lstrip("\n")

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```
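
Loading the full-precision weights onto a single GPU can be memory-hungry. A lower-precision load along the following lines should also work with the `generate` helper above; this is a generic Transformers pattern (it needs `accelerate` installed for `device_map="auto"`), not something specific to this checkpoint:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/falcoder-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Half-precision weights, automatically placed on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```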

### Citation
```
@misc {manuel_romero_2023,
	author       = { {Manuel Romero} },
	title        = { falcoder-7b (Revision e061237) },
	year         = 2023,
	url          = { https://huggingface.co/mrm8488/falcoder-7b },
	doi          = { 10.57967/hf/0789 },
	publisher    = { Hugging Face }
}
```