---
license: mit
datasets:
- chloeliu/reddit_nosleep_posts
language:
- en
tags:
- fun
- horror
- writing
widget:
- text: "[WP] We don't go to ravenholm anymore [RESPONSE] "
  example_title: "Ravenholm"
- text: "[WP] The man in the corner of my window [RESPONSE] "
  example_title: "The man in the corner"
co2_eq_emissions:
  emissions: 70
  source: "https://mlco2.github.io/impact/#compute"
  training_type: "fine-tuning"
  geographical_location: "Oregon, USA"
  hardware_used: "1x T4, Google Colab"
---

# GPT-NoSleep-1.5b
This is the largest release of GPT-NoSleep, a fine-tuned version of [GPT2-XL](https://huggingface.co/gpt2-xl) trained on the 'reddit_nosleep_posts' dataset.
Smaller releases include:
 * [GPT-NoSleep-355m](https://huggingface.co/DarwinAnim8or/GPT-NoSleep-355m)

An accompanying writing-prompt generator is also available:
 * [Space for prompt generation](https://huggingface.co/spaces/DarwinAnim8or/NoSleepWritingPromptGenerator)
 * [The model](https://huggingface.co/DarwinAnim8or/NoSleepPromptGen)
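
For quick experimentation, here is a minimal generation sketch using the `transformers` pipeline. The repo id `DarwinAnim8or/GPT-NoSleep-1.5b` is an assumption based on the naming of the 355m release; the prompt format follows the widget examples above.

```python
# Minimal generation sketch; the repo id is assumed from the 355m release's naming.
from transformers import pipeline

generator = pipeline("text-generation", model="DarwinAnim8or/GPT-NoSleep-1.5b")

# Prompts follow the "[WP] <prompt> [RESPONSE] " format shown in the widget examples.
prompt = "[WP] We don't go to ravenholm anymore [RESPONSE] "
result = generator(prompt, max_new_tokens=200, do_sample=True, top_p=0.95, temperature=0.9)
print(result[0]["generated_text"])
```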

# Training Procedure
This model was trained on the 'reddit_nosleep_posts' dataset on Google Colab, for 2 epochs with a learning rate of 1e-2.
Special thanks to Skyler for helping to train a model of this size!
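
For reference, below is a rough sketch of a comparable fine-tuning run with the `transformers` Trainer. Only the base model, dataset, epoch count, learning rate, and hardware come from this card; the dataset column name, batch size, and memory settings are assumptions, not the exact script used.

```python
# Rough fine-tuning sketch, NOT the exact training script.
# Grounded in the card: gpt2-xl base, reddit_nosleep_posts, 2 epochs, lr 1e-2, 1x T4.
# Assumed: the "text" column name, batch size, accumulation, and fp16 settings.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token  # GPT2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

dataset = load_dataset("chloeliu/reddit_nosleep_posts", split="train")

def tokenize(batch):
    # Assumes stories were pre-formatted as "[WP] <prompt> [RESPONSE] <story>"
    # in a "text" column (hypothetical column name).
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-nosleep-1.5b",
        num_train_epochs=2,             # from the card
        learning_rate=1e-2,             # from the card
        per_device_train_batch_size=1,  # assumed; a 1.5B model leaves little T4 memory
        gradient_accumulation_steps=8,  # assumed
        fp16=True,                      # assumed, to fit on a single T4
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```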

# Biases & Limitations
This model likely inherits the biases and limitations of the original GPT2 it is based on, along with heavy additional biases from the dataset itself.
Since its purpose is to generate horror stories, its output is not suitable for all audiences.

# Intended Use
This model is meant for fun, nothing else.