merve (HF staff) committed on
Commit c0dc01e
1 Parent(s): 0992a31

Update README.md

Files changed (1):
  1. README.md +11 -10
README.md CHANGED
@@ -7,28 +7,29 @@ model-index:
  results: []
  ---

- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->

- # chatgpt-prompt-generator-v12
+ # ChatGPT Prompt Generator v12

- This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
+ This model is a fine-tuned version of [BART-large](https://huggingface.co/facebook/bart-large) on a ChatGPT prompts dataset.
  It achieves the following results on the evaluation set:
  - Train Loss: 2.4800
  - Validation Loss: 2.7320
  - Epoch: 4

- ## Model description
-
- More information needed

  ## Intended uses & limitations

- More information needed
+ You can use this model to generate ChatGPT persona prompts. Simply pass a persona as input, as in the example below:

- ## Training and evaluation data

- More information needed

+ ```python
+ from transformers import BartForConditionalGeneration, BartTokenizer
+
+ # Load the fine-tuned checkpoint; the model ID is assumed from this repository's name.
+ tokenizer = BartTokenizer.from_pretrained("merve/chatgpt-prompt-generator-v12")
+ model = BartForConditionalGeneration.from_pretrained("merve/chatgpt-prompt-generator-v12")
+
+ example_english_phrase = "photographer"
+ batch = tokenizer(example_english_phrase, return_tensors="pt")
+ generated_ids = model.generate(batch["input_ids"], max_new_tokens=150)
+ output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+ print(output[0])  # the generated ChatGPT prompt
+ ```

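+ Alternatively, here is a minimal sketch using the transformers pipeline API (again assuming this repository's model ID):
+
+ ```python
+ from transformers import pipeline
+
+ # "text2text-generation" wraps tokenization, generation, and decoding for seq2seq models like BART
+ generator = pipeline("text2text-generation", model="merve/chatgpt-prompt-generator-v12")
+ print(generator("photographer", max_new_tokens=150)[0]["generated_text"])
+ ```
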
  ## Training procedure