migtissera committed
Commit 6f6102e
1 Parent(s): d6d694b

Update README.md

Files changed (1)
  1. README.md +79 -0
README.md CHANGED
All Synthia models are uncensored. Please use them with caution and with the best of intentions.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

## Example Usage

### Here is the prompt format:

```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
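
The Tree of Thoughts system message shown earlier can simply be dropped into the SYSTEM slot of this format. As a minimal sketch (not part of the model card; the `build_prompt` helper is hypothetical):

```python
# Sketch: assemble a single-turn Synthia prompt with the ToT system message.
TOT_SYSTEM = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)


def build_prompt(system: str, user: str) -> str:
    # Hypothetical helper following the SYSTEM/USER/ASSISTANT layout above;
    # the trailing "ASSISTANT: " cues the model to start its reply.
    return f"SYSTEM: {system} \nUSER: {user} \nASSISTANT: "


prompt = build_prompt(
    TOT_SYSTEM,
    "How is a rocket launched from the surface of the earth to Low Earth Orbit?",
)
print(prompt)
```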

### Here is a code example showing how to use this model:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B-v1.2"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,  # set to True (requires bitsandbytes) to reduce GPU memory use
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the prompt and move it to the GPU.
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    # Sampling parameters for generation.
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    # Keep only the newly generated tokens, and cut off anything the model
    # continues past the next "USER:" turn marker.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return answer


conversation = "SYSTEM: As an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    # Save your conversation: append each turn to the JSONL log.
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
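
Because each turn is appended to the JSONL log, the conversation can be reloaded later. A minimal sketch for reading the file back, assuming the `output_file_path` used above:

```python
import json

# Reload the conversation log written by the loop above; each line holds one
# {"prompt": ..., "answer": ...} record.
with open("./Synthia-70B-conversations.jsonl") as log_file:
    for line in log_file:
        record = json.loads(line)
        print(f"You: {record['prompt']}\nSynthia: {record['answer']}\n")
```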