---
license: apache-2.0
datasets:
- DarwinAnim8or/greentext
language:
- en
tags:
- fun
- greentext
widget:
- text: '>be me'
  example_title: be me
- text: '>be zoo keeper'
co2_eq_emissions:
  emissions: 10
  source: https://mlco2.github.io/impact/#compute
  training_type: fine-tuning
  geographical_location: Oregon, USA
  hardware_used: 1x T4, Google Colab
---
# Pythia-Greentext-1.4b
A fine-tuned version of [Pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) on the 'greentext' dataset.
A demo is available [here](https://huggingface.co/spaces/DarwinAnim8or/Pythia-Greentext-Playground); the demo playground is recommended over the inference box on the right.
This is an alternate take on my "GPT-Greentext" releases.
# Training Procedure
This model was trained on the 'greentext' dataset for 1 epoch on Google Colab, with a learning rate of 1e-2.
Notably, this uses the "prompt"/"completion"-style JSONL file rather than the plain-text file found in the greentext dataset; an illustrative record is sketched below.
This yields somewhat better and, above all, more consistent results.
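For illustration, a single record in that prompt/completion JSONL format might look like the sketch below. The field names and content are assumptions for the sake of example, not copied from the actual dataset.
```python
import json

# A hypothetical prompt/completion record (illustrative only; not taken from the dataset):
example = {
    "prompt": ">be me",
    "completion": "\n>wake up late\n>boss never notices\n>mfw I got away with it",
}

# JSONL stores one JSON object per line:
with open("greentext.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```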
# Biases & Limitations
This likely carries the same biases and limitations as the original model it is based on, plus additional heavy biases from the greentext dataset.
Note that offensive or non-PG output is entirely possible and likely to occur.
# Intended Use
This model is meant for fun, nothing else.
# Noteworthy differences between this model and the others
This model tends to work best with `no_repeat_ngram_size` values of 1 or 2, whereas the other models in this series tend to prefer 3.
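If you use the plain `transformers` API rather than happytransformer, the same knob is the `no_repeat_ngram_size` argument to `generate`. A minimal sketch; the sampling settings mirror the sample below and are not prescriptive:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DarwinAnim8or/Pythia-Greentext-1.4b")
model = AutoModelForCausalLM.from_pretrained("DarwinAnim8or/Pythia-Greentext-1.4b")

inputs = tokenizer(">be me\n>", return_tensors="pt")
# Values of 1 or 2 tend to work best for this model; 3 suits the others in the series:
output = model.generate(**inputs, do_sample=True, top_k=80, temperature=0.1,
                        max_length=150, no_repeat_ngram_size=2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```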
# Sample Use
```python
# Import the model:
from happytransformer import HappyGeneration, GENSettings

happy_gen = HappyGeneration("GPT-NEO", "DarwinAnim8or/Pythia-Greentext-1.4b")

# Set generation settings:
args_top_k = GENSettings(no_repeat_ngram_size=2, do_sample=True, top_k=80,
                         temperature=0.1, max_length=150, early_stopping=False)

# Generate a response:
result = happy_gen.generate_text(""">be me
>""", args=args_top_k)

print(result)       # full result object
print(result.text)  # generated text only
```