---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
  - llama-factory
  - full
  - generated_from_trainer
model-index:
  - name: C020_random_sample_llama3-8b-base_pretrain_20240505_135320
    results: []
---

# C020_random_sample_llama3-8b-base_pretrain_20240505_135320

This model is a fine-tuned version of /data/pro-align/progressalign/shared_storage/downloaded_models/llama3-8b-base on the C020_random_sample_data dataset. It achieves the following results on the evaluation set:

- Loss: 1.9418
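
A minimal loading sketch with Transformers (the repository ID below is an assumption taken from the model name in the title; substitute the actual Hub ID or local checkpoint path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed location; replace with the actual Hub repo ID or local checkpoint path.
model_id = "C020_random_sample_llama3-8b-base_pretrain_20240505_135320"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The name suggests continued pretraining of a base model, so this uses plain
# text completion rather than a chat-style prompt.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```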

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction as Transformers `TrainingArguments` is sketched after the list):

- learning_rate: 1.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP
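
These values map onto Transformers `TrainingArguments` roughly as below. This is a sketch reconstructed from the list above, not the exact LLaMA-Factory launch config; `output_dir` is a placeholder, and the totals follow from the per-device values times 8 GPUs (4 × 8 = 32 for training, 8 × 8 = 64 for evaluation):

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; the run itself was
# launched via LLaMA-Factory, so this only mirrors the standard knobs.
args = TrainingArguments(
    output_dir="./C020_random_sample_llama3-8b-base_pretrain",  # placeholder
    learning_rate=1.5e-5,
    per_device_train_batch_size=4,   # x 8 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,    # x 8 GPUs -> total eval batch size 64
    seed=42,
    num_train_epochs=4.0,
    lr_scheduler_type="polynomial",
    warmup_steps=20,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP"; bf16 is also plausible on this hardware
)
```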

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9087        | 0.4032 | 200  | 1.9717          |
| 1.8752        | 0.8065 | 400  | 1.9418          |
| 1.6383        | 1.2097 | 600  | 1.9440          |
| 1.7073        | 1.6129 | 800  | 1.9435          |
| 1.6699        | 2.0161 | 1000 | 1.9428          |
| 1.7212        | 2.4194 | 1200 | 1.9445          |
| 1.7346        | 2.8226 | 1400 | 1.9443          |
| 1.7028        | 3.2258 | 1600 | 1.9448          |
| 1.7383        | 3.6290 | 1800 | 1.9450          |
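
As a sanity check, the validation loss can be read as a perplexity via exp(loss), assuming it is the usual mean per-token cross-entropy for causal language modeling:

```python
import math

best_val_loss = 1.9418  # best validation loss from the table (epoch 0.8065)
print(math.exp(best_val_loss))  # ~6.97 perplexity, under the mean cross-entropy assumption
```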

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1