---
license: mit
datasets:
- DarwinAnim8or/greentext
language:
- en
tags:
- fun
- greentext
widget:
- text: ">be me"
  example_title: "be me"
co2_eq_emissions:
  emissions: 60
  source: "https://mlco2.github.io/impact/#compute"
  training_type: "fine-tuning"
  geographical_location: "Oregon, USA"
  hardware_used: "1 T4, Google Colab"
---

# GPT-Greentext-125m
A finetuned version of [GPT-Neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the 'greentext' dataset (linked above).
A demo is available [here](#TODO).

# Training Procedure
This model was trained on the 'greentext' dataset (linked above), using the "happytransformer" library on Google Colab.
It was trained for 15 epochs with a learning rate of 1e-2.
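
The training script itself isn't included here; below is a minimal sketch of what the run may have looked like with the happytransformer library, assuming the dataset was first exported to a plain-text file (the name `greentext.txt` is a placeholder):

```python
# Sketch of the finetuning run described above (hyperparameters from this card).
from happytransformer import HappyGeneration, GENTrainArgs

# Start from the base GPT-Neo-125M model:
happy_gen = HappyGeneration("GPT-NEO", "EleutherAI/gpt-neo-125M")

# 15 epochs at learning rate 1e-2, as stated above:
train_args = GENTrainArgs(num_train_epochs=15, learning_rate=1e-2)

# "greentext.txt" is a hypothetical plain-text export of the dataset:
happy_gen.train("greentext.txt", args=train_args)

# Save the finetuned weights:
happy_gen.save("GPT-Greentext-125m/")
```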

# Biases & Limitations
This model likely carries the same biases and limitations as the original GPT-Neo-125M it is based on, plus additional heavy biases from the greentext dataset.
It will likely generate offensive output.

# Intended Use
This model is meant for fun, nothing else.

# Sample Use
```python
# Load the finetuned model:
from happytransformer import HappyGeneration, GENSettings

happy_gen = HappyGeneration("GPT-NEO", "DarwinAnim8or/GPT-Greentext-125m")

# Set generation settings:
args_top_k = GENSettings(no_repeat_ngram_size=3, do_sample=True, top_k=80, temperature=0.4, max_length=50, early_stopping=False)

# Generate a response from a typical greentext opener:
result = happy_gen.generate_text(""">be me
>""", args=args_top_k)

print(result)       # full generation result object
print(result.text)  # the generated text only
```
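
The relatively low `temperature` (0.4) combined with top-k sampling keeps completions close to the greentext format; raising `temperature` or `top_k` should produce more varied but less coherent output.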