
Aria 40B is based on the Open-Assistant Falcon 40B SFT OASST-TOP1 model.

This model is a fine-tune of TII's Falcon 40B LLM. It was trained on top-1 (highest-quality) demonstrations from the OASST dataset (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs, LIMA-style dropout (p=0.3), and a context length of 2048 tokens. We focus on human-preference evaluation as the most valuable signal for this model: we believe pure technical benchmarks matter less than evaluation by humans, who are our end users.

The fine-tuned version of Falcon we used as the base model was trained on the Open-Assistant dataset and achieves better human-eval results than some well-known closed models, as shown in this third-party article and evaluation: https://medium.com/@geronimo7/open-source-chatbots-in-the-wild-9a44d7a41a48

We are currently working on RoPE scaling to extend the context length and on chain-of-thought integration; this model card will be updated soon. Stay tuned.

LMSYS eval for the Aria 40B base model (Falcon 40B OA): https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560

Model Details

Prompting

Two special tokens are used to mark the beginning of user and assistant turns: <|prompter|> and <|assistant|>. Each turn ends with a <|endoftext|> token.

Input prompt example:

```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```

The input ends with the <|assistant|> token to signal that the model should start generating the assistant reply.
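
As a minimal usage sketch (not part of the original card), the snippet below builds this prompt format and generates a reply with Hugging Face transformers. The repo id Faradaylab/Aria-40B, the sampling settings, and the hardware assumptions (accelerate installed, enough GPU memory for a 40B model) are illustrative, not prescribed by this card:

```python
# Minimal sketch, assuming the transformers and accelerate packages and enough
# GPU memory for a 40B model. Repo id and sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Faradaylab/Aria-40B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,  # Falcon repos ship custom modeling code
)

# <|prompter|> ... <|endoftext|> wraps the user turn; the trailing <|assistant|>
# signals the model to start generating the assistant reply.
prompt = (
    "<|prompter|>What is a meme, and what's the history behind this word?"
    "<|endoftext|><|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    eos_token_id=tokenizer.eos_token_id,  # <|endoftext|> closes the turn
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```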

Configuration Details

Model:

```yaml
falcon-40b:
  dtype: bf16
  log_dir: "falcon_log_40b"
  learning_rate: 5e-6
  model_name: "tiiuae/falcon-40b"
  deepspeed_config: configs/zero3_config_falcon.json
  output_dir: falcon
  weight_decay: 0.0
  max_length: 2048
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 1
  per_device_train_batch_size: 18
  per_device_eval_batch_size: 10
  eval_steps: 80
  save_steps: 80
  num_train_epochs: 8
  save_total_limit: 4
  use_flash_attention: false
  residual_dropout: 0.3
  residual_dropout_lima: true
  sort_by_length: false
  save_strategy: steps
```
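
Note that with per_device_train_batch_size: 18 and gradient_accumulation_steps: 1, the effective batch size of 144 quoted above implies training across 8 devices (18 × 1 × 8 = 144).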

Dataset:

```yaml
oasst-top1:
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
        input_file_path: 2023-05-06_OASST_labels.jsonl.gz
        val_split: 0.05
        top_k: 1
```
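
To illustrate what top_k: 1 means, here is a rough sketch (our assumption, not the actual training pipeline) of following the top-ranked reply chain through each OASST message tree; the field names ("prompt", "replies", "rank", "role", "text") follow the public OASST export format:

```python
# Rough sketch (assumption): each line of the .jsonl.gz export holds one
# message tree, and top-1 selection keeps only the best-ranked reply (rank 0)
# at each branching point.
import gzip
import json

def best_reply(message):
    """Return the top-ranked reply to a message (rank 0 = best), if any."""
    ranked = [r for r in message.get("replies") or [] if r.get("rank") is not None]
    return min(ranked, key=lambda r: r["rank"]) if ranked else None

with gzip.open("2023-05-06_OASST_labels.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        node = json.loads(line)["prompt"]  # the root prompter message
        while node is not None:            # follow the top-1 reply chain
            print(node["role"], node["text"][:60].replace("\n", " "))
            node = best_reply(node)
```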