Update README.md
README.md
CHANGED
@@ -57,6 +57,7 @@ Please see below for detailed instructions on reproducing benchmark results.
 We compare our results to our base Preview2 model (using LM Evaluation Harness).
 
 We find **112%** of the base model's performance on AGI Eval, averaging **0.463**.
+A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance.
 
 ![OpenOrca-Platypus2-13B AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BAGIEval.webp "AGIEval Performance")
 
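For context, the comparison above was produced with LM Evaluation Harness. Below is a minimal, illustrative sketch of how one might score the LSAT subsets of AGIEval with a recent version of the harness; the task names, model arguments, and API call shown here are assumptions based on lm-evaluation-harness v0.4 and are not taken from this repository's own reproduction instructions.

```python
# Illustrative sketch only: assumes lm-evaluation-harness >= 0.4 is installed
# (pip install lm-eval). Task names and model arguments are assumptions and
# may not match the exact setup behind the numbers reported in the README.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face causal LM backend
    model_args="pretrained=Open-Orca/OpenOrca-Platypus2-13B,dtype=bfloat16",
    tasks=["agieval_lsat_lr", "agieval_lsat_ar", "agieval_lsat_rc"],  # assumed AGIEval LSAT task names
    num_fewshot=0,
    batch_size=4,
)

# Per-task metrics, e.g. results["results"]["agieval_lsat_lr"]["acc,none"]
print(results["results"])
```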