mpasila committed
Commit 4991602
1 Parent(s): d0b45fe

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -30,7 +30,7 @@ LoRA: [mpasila/Viking-Magnum-v0.1-LoRA-7B](https://huggingface.co/mpasila/Viking
 
 Another thing to note is this was trained with regular LoRA (not quantized/QLoRA) so it should improve the quality a bit. This model's context length is only 4096 so it's trained on that too but I think you can use RoPE with it.
 
-LoRA rank was 128 and Alpha set to the same.
+LoRA rank was 128 and Alpha set to the same. Trained for 1 epoch.
 
 # Uploaded model
 
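
For context, a minimal sketch of what a PEFT LoRA setup with the hyperparameters named in this change could look like (rank 128, alpha equal to the rank, unquantized base model, one epoch, 4096-token context). This is not the author's actual training script; the base-model name, target modules, and dropout value are illustrative assumptions.

```python
# Illustrative LoRA configuration matching the hyperparameters in the commit:
# rank 128, alpha 128, regular (unquantized) LoRA rather than QLoRA.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "LumiOpen/Viking-7B"  # assumed base model, for illustration only

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)  # full precision, no 4-bit quantization

lora_config = LoraConfig(
    r=128,              # LoRA rank, as stated in the README change
    lora_alpha=128,     # alpha set to the same value as the rank
    lora_dropout=0.0,   # assumption; not specified in the commit
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training itself (1 epoch, sequences capped at the model's 4096-token context)
# would follow, e.g. with trl's SFTTrainer; omitted here.
```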