aashish1904 committed on
Commit
6e3f345
1 Parent(s): 82bb518

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +151 -0
README.md ADDED
---
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
metrics:
- accuracy
- speed
library_name: transformers
tags:
- coder
- Text-Generation
- Transformers
- HelpingAI
license: mit
widget:
- text: |
    <|system|>
    You are a chatbot who can code!</s>
    <|user|>
    Write me a function to search for OEvortex on YouTube using the webbrowser module.</s>
    <|assistant|>
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain the working of AI to me.</s>
    <|assistant|>
model-index:
- name: HelpingAI-Lite
  results:
  - task:
      type: text-generation
    metrics:
    - name: Epoch
      type: Training Epoch
      value: 3
    - name: Eval Logits/Chosen
      type: Evaluation Logits for Chosen Samples
      value: -2.707406759262085
    - name: Eval Logits/Rejected
      type: Evaluation Logits for Rejected Samples
      value: -2.65652441978546
    - name: Eval Logps/Chosen
      type: Evaluation Log-probabilities for Chosen Samples
      value: -370.129670421875
    - name: Eval Logps/Rejected
      type: Evaluation Log-probabilities for Rejected Samples
      value: -296.073825390625
    - name: Eval Loss
      type: Evaluation Loss
      value: 0.513750433921814
    - name: Eval Rewards/Accuracies
      type: Evaluation Reward Accuracies
      value: 0.738095223903656
    - name: Eval Rewards/Chosen
      type: Evaluation Rewards for Chosen Samples
      value: -0.0274422804903984
    - name: Eval Rewards/Margins
      type: Evaluation Reward Margins
      value: 1.008722543614307
    - name: Eval Rewards/Rejected
      type: Evaluation Rewards for Rejected Samples
      value: -1.03616464138031
    - name: Eval Runtime
      type: Evaluation Runtime
      value: 93.5908
    - name: Eval Samples
      type: Number of Evaluation Samples
      value: 2000
    - name: Eval Samples per Second
      type: Evaluation Samples per Second
      value: 21.37
    - name: Eval Steps per Second
      type: Evaluation Steps per Second
      value: 0.673
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/HelpingAI-Lite-GGUF
This is a quantized version of [OEvortex/HelpingAI-Lite](https://huggingface.co/OEvortex/HelpingAI-Lite) created with llama.cpp.

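As a quick sketch of how a GGUF file from this repo can be run locally with llama-cpp-python (the filename and `chat_format` below are assumptions; pick the actual quantization file from the repo's file list):

```python
# Minimal llama-cpp-python sketch. The GGUF filename is a placeholder;
# substitute the quantization file you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="HelpingAI-Lite.Q4_K_M.gguf",  # assumed filename
    n_ctx=2048,            # context window size
    chat_format="zephyr",  # assumed; matches the <|system|>/<|user|>/<|assistant|> prompt format
)

messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write a Python function that prints the first 10 Fibonacci numbers."},
]

# Chat-style completion; returns an OpenAI-like response dict
response = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
print(response["choices"][0]["message"]["content"])
```
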
# Original Model Card

# HelpingAI-Lite
# Subscribe to my YouTube channel
[Subscribe](https://youtube.com/@OEvortex)

GGUF version [here](https://huggingface.co/OEvortex/HelpingAI-Lite-GGUF)

HelpingAI-Lite is a lite version of the HelpingAI model that can assist with coding tasks. It's trained on a diverse range of datasets and fine-tuned to provide accurate and helpful responses.

## License

This model is licensed under MIT.

## Datasets

The model was trained on the following datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized

## Language

The model supports the English language.

## Usage

### CPU and GPU code

```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator (picks the available CPU or GPU device)
accelerator = Accelerator()

# Initialize the text-generation pipeline on that device
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite", device=accelerator.device)

# Define the chat messages
messages = [
    {
        "role": "system",
        "content": "You are a chatbot who can help code!",
    },
    {
        "role": "user",
        "content": "Write me a function to calculate the first 10 numbers of the Fibonacci sequence in Python and print them to the CLI.",
    },
]

# Prepare the prompt using the model's chat template
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate predictions
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text
print(outputs[0]["generated_text"])
```
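
If you prefer the lower-level `transformers` API over the pipeline, the same chat template can be applied by hand. This is a minimal sketch (the dtype/device handling and generation settings are illustrative, not an official recipe for this model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite"

# Load the tokenizer and model; device_map="auto" uses a GPU when available
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write a Python function that prints the first 10 Fibonacci numbers."},
]

# Apply the model's chat template and tokenize in one step
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly generated tokens
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```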