---
license: mit
datasets:
  - DarwinAnim8or/DMV-Plate-Review
language:
  - en
pipeline_tag: text-generation
tags:
  - dmv
  - fun
widget:
  - text: "PLATE: LCDR"
    example_title: "Plate LCDR"
  - text: "PLATE: LUCH"
    example_title: "Plate LUCH"
  - text: "PLATE: JJ BINKS"
    example_title: "Plate JJ BINKS"
co2_eq_emissions:
  emissions: 20
  source: "https://mlco2.github.io/impact/#compute"
  training_type: "fine-tuning"
  geographical_location: "Oregon, USA"
  hardware_used: "1 T4, Google Colab"
---

# GPT-DMV-125m

A fine-tuned version of [GPT-Neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the 'DMV' dataset linked above.

A demo is available [here](https://huggingface.co/spaces/DarwinAnim8or/GPT-DMV-Playground). (I recommend using the demo playground rather than the Inference window on the right.)

# Training Procedure

This model was trained on the 'DMV' dataset using the "Happy Transformer" library on Google Colab, for 5 epochs with a learning rate of 1e-2. A minimal sketch of this procedure is included at the end of this card.

# Biases & Limitations

This model likely carries the same biases and limitations as the original GPT-Neo-125M it is based on, plus heavy biases from the DMV dataset itself.

# Intended Use

This model is meant for fun, nothing else.

# Sample Use

```python
# Import and load the fine-tuned model
from happytransformer import HappyGeneration, GENSettings

happy_gen = HappyGeneration("GPT-NEO", "DarwinAnim8or/GPT-DMV-125m")

# Configure top-k sampling
args_top_k = GENSettings(
    no_repeat_ngram_size=3,
    do_sample=True,
    top_k=80,
    temperature=0.4,
    max_length=50,
    early_stopping=False,
)

# Generate a response to a prompt in the dataset's format
result = happy_gen.generate_text("""PLATE: LUCH
REVIEW REASON CODE: """, args=args_top_k)
print(result)       # the full GenerationResult object
print(result.text)  # the generated text only
```
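
# Training Sketch

For reference, below is a minimal sketch of the fine-tuning described under "Training Procedure", using Happy Transformer's training API. The file name `train.txt` (a plain-text dump of the DMV dataset) is a hypothetical stand-in; this is not the original training script.

```python
# A minimal sketch of the fine-tuning run, not the original script.
from happytransformer import HappyGeneration, GENTrainArgs

# Start from the base GPT-Neo-125M checkpoint
happy_gen = HappyGeneration("GPT-NEO", "EleutherAI/gpt-neo-125M")

# 5 epochs with learning rate 1e-2, as stated in "Training Procedure"
train_args = GENTrainArgs(num_train_epochs=5, learning_rate=1e-2)

# "train.txt" is a hypothetical plain-text file of plate/review examples
happy_gen.train("train.txt", args=train_args)

# Save the fine-tuned weights
happy_gen.save("GPT-DMV-125m/")
```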