### README (Chinese Version)

# Qwen2-Boundless

## Introduction

Qwen2-Boundless is a model fine-tuned from Qwen2-1.5B-Instruct and designed to answer questions of every kind: ethical, illegal, pornographic, or violent topics can all be asked about freely. It was trained on a specially curated dataset and can handle complex and diverse scenarios. Note that the fine-tuning data is entirely in Chinese, so the model performs better in Chinese.

> **Warning**: This model is intended for research and testing purposes only. Users must comply with local laws and regulations and are responsible for their own actions.

## Using the Model

You can load and use the model with the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

device = "cuda"  # the device to load the model onto

# Load the model and tokenizer from the directory this script lives in
current_directory = os.path.dirname(os.path.abspath(__file__))

model = AutoModelForCausalLM.from_pretrained(
    current_directory,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)

prompt = "Hello?"
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Drop the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
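
The snippet above loads the checkpoint from the script's own directory. If you would rather pull it straight from the Hub, the usual `from_pretrained` pattern applies; a minimal sketch, assuming `ystemsrx/Qwen2-Boundless` is this model's repo id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ystemsrx/Qwen2-Boundless"  # assumed repo id; adjust if yours differs

model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```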

### Multi-Turn Conversation

To hold a continuous, multi-turn conversation, you can use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import os

device = "cuda"  # the device to load the model onto

# Get the directory containing this script (the checkpoint is expected alongside it)
current_directory = os.path.dirname(os.path.abspath(__file__))

model = AutoModelForCausalLM.from_pretrained(
    current_directory,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)

messages = [
    {"role": "system", "content": "You are a helpful assistant."}
]

while True:
    # Read the user's input
    user_input = input("User: ")

    # Append the user's turn to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the prompt text
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    # Generate a response
    generated_ids = model.generate(
        model_inputs.input_ids,
        max_new_tokens=512
    )
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    # Decode and print the response
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(f"Assistant: {response}")

    # Append the assistant's turn so context carries over
    messages.append({"role": "assistant", "content": response})
```
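
One caveat about the loop above: `messages` grows without bound, so a long session can eventually exceed the model's context window. A minimal trimming sketch, where the cap of 20 turns is an arbitrary assumption rather than a property of the model:

```python
MAX_TURNS = 20  # hypothetical cap; tune to your context budget

def trim_history(messages, max_turns=MAX_TURNS):
    """Keep the system message plus the most recent user/assistant pairs."""
    system, rest = messages[:1], messages[1:]
    return system + rest[-max_turns * 2:]  # each turn is two messages

# In the loop, call `messages = trim_history(messages)` after appending the reply.
```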

### Streaming Responses

For applications that need a streamed response, use the following code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from transformers.trainer_utils import set_seed
from threading import Thread
import random
import os

DEFAULT_CKPT_PATH = os.path.dirname(os.path.abspath(__file__))

def _load_model_tokenizer(checkpoint_path, cpu_only):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint_path, resume_download=True)

    device_map = "cpu" if cpu_only else "auto"

    model = AutoModelForCausalLM.from_pretrained(
        checkpoint_path,
        torch_dtype="auto",
        device_map=device_map,
        resume_download=True,
    ).eval()
    model.generation_config.max_new_tokens = 512  # For chat.

    return model, tokenizer

def _get_input() -> str:
    while True:
        try:
            message = input('User: ').strip()
        except UnicodeDecodeError:
            print('[ERROR] Encoding error in input')
            continue
        except KeyboardInterrupt:
            exit(1)
        if message:
            return message
        print('[ERROR] Query is empty')

def _chat_stream(model, tokenizer, query, history):
    # Rebuild the full conversation from the accumulated history
    conversation = [
        {'role': 'system', 'content': ''},
    ]
    for query_h, response_h in history:
        conversation.append({'role': 'user', 'content': query_h})
        conversation.append({'role': 'assistant', 'content': response_h})
    conversation.append({'role': 'user', 'content': query})
    inputs = tokenizer.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        return_tensors='pt',
    )
    inputs = inputs.to(model.device)
    streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
    generation_kwargs = dict(
        input_ids=inputs,
        streamer=streamer,
    )
    # Run generation on a background thread so tokens can be consumed as they arrive
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    for new_text in streamer:
        yield new_text

def main():
    checkpoint_path = DEFAULT_CKPT_PATH
    seed = random.randint(0, 2**32 - 1)  # Generate a random seed
    set_seed(seed)  # Set the random seed
    cpu_only = False

    history = []

    model, tokenizer = _load_model_tokenizer(checkpoint_path, cpu_only)

    while True:
        query = _get_input()

        print(f"\nUser: {query}")
        print("\nAssistant: ", end="")
        try:
            partial_text = ''
            for new_text in _chat_stream(model, tokenizer, query, history):
                print(new_text, end='', flush=True)
                partial_text += new_text
            print()
            history.append((query, partial_text))

        except KeyboardInterrupt:
            print('Generation interrupted')
            continue

if __name__ == "__main__":
    main()
```
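
The background thread is what makes the streaming work: `model.generate` blocks until generation finishes, so running it on its own thread lets the main thread consume tokens from the `TextIteratorStreamer` as they are produced. The `timeout=60.0` keeps the consumer from waiting forever if generation stalls.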

## Dataset

Qwen2-Boundless was fine-tuned on a special dataset, `bad_data.json`, which contains a wide range of text covering topics such as ethics, law, pornography, and violence. Because the fine-tuning data is entirely in Chinese, the model performs better in Chinese. If you would like to explore or use the dataset, it is available at the link below (a quick inspection sketch follows the link):

- [bad_data.json dataset](https://huggingface.co/datasets/ystemsrx/bad_data.json)
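
A minimal inspection sketch; it assumes you have downloaded `bad_data.json` into the working directory and that its top level parses as a JSON list (the record schema is not documented here, so print an entry to see it):

```python
import json

# Assumes bad_data.json was downloaded from the link above
with open("bad_data.json", encoding="utf-8") as f:
    data = json.load(f)

print(len(data))  # number of records, assuming a top-level list
print(data[0])    # inspect one record to learn its schema
```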

## GitHub Repository

For more details about the model and ongoing updates, visit our GitHub repository:

- [GitHub: ystemsrx/Qwen2-Boundless](https://github.com/ystemsrx/Qwen2-Boundless)

## License

The model and dataset are open-sourced under the Apache 2.0 License; see the [LICENSE](https://github.com/ystemsrx/Qwen2-Boundless/blob/main/LICENSE) file for details.

## Disclaimer

All content provided by this model is for research and testing purposes only. The model's developers are not responsible for any possible misuse. Users must comply with the relevant laws and regulations and bear all responsibility arising from the use of this model.