---
license: mit
widget:
- text: '<|system|>

    You are a helpful assistant</s>

    <|user|>

    What is your name? Tell me about yourself.</s>

    <|assistant|>'
model-index:
- name: tinyllama-730M-test
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 25.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 33.82
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.43
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 42.9
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 51.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
      name: Open LLM Leaderboard
---
I cut my TinyLlama 1.1B Cinder v2 down from 22 layers to 14. At 14 layers there was no coherent text, but there were emerging ideas of a response. I then trained for 1,000 steps on a step-by-step dataset and 6,000 steps on Reason-with-cinder. The loss was still over 1 and the learning rate was still over 4, so this model needs significant further training. I am putting it up as a base model that needs work. If you continue training it, please let me know on the TinyLlama Discord; I have some interesting plans for this model. A sketch of the layer-trimming step is shown below.
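A minimal sketch of that kind of layer trimming with Hugging Face `transformers` follows. This is not the exact script used for this model: the source repo id is a placeholder, and keeping the first 14 blocks and float16 weights are illustrative assumptions.

```python
# Sketch: trim a LLaMA-style model from 22 decoder layers to 14.
# The source id below is a placeholder for the TinyLlama 1.1B Cinder v2
# checkpoint; which layers to keep and the dtype are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

source_id = "path/to/tinyllama-1.1b-cinder-v2"  # placeholder, substitute the real checkpoint
model = AutoModelForCausalLM.from_pretrained(source_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(source_id)

keep = 14  # target depth, down from 22
# LlamaForCausalLM stores its decoder blocks in model.model.layers (an nn.ModuleList);
# slicing a ModuleList returns a new ModuleList with the selected blocks.
model.model.layers = model.model.layers[:keep]
model.config.num_hidden_layers = keep

model.save_pretrained("tinyllama-730M-test")
tokenizer.save_pretrained("tinyllama-730M-test")
```

After a cut like this the model is no longer coherent on its own, which is why the continued training steps described above are needed.
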
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__tinyllama-730M-test)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.55|
|AI2 Reasoning Challenge (25-Shot)|25.09|
|HellaSwag (10-Shot) |33.82|
|MMLU (5-Shot) |24.43|
|TruthfulQA (0-shot) |42.90|
|Winogrande (5-shot) |51.07|
|GSM8k (5-shot) | 0.00|
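
For reference, a minimal generation sketch using the `<|system|>` / `<|user|>` / `<|assistant|>` prompt format from the widget metadata above; the sampling settings are illustrative assumptions, not tuned values.

```python
# Sketch: prompt the model with the widget's chat format and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Josephgflowers/tinyllama-730M-test"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = (
    "<|system|>\nYou are a helpful assistant</s>\n"
    "<|user|>\nWhat is your name? Tell me about yourself.</s>\n"
    "<|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
# Sampling settings are illustrative; adjust as needed.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```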