Update README.md
README.md
CHANGED
@@ -12,11 +12,12 @@ The context size has been increased to 4096.
 
 The dataset used to fine-tune this model is available [here](https://huggingface.co/airoboros-gpt4), with a specific focus on:
 - trivia
-- math
+- math/reasoning (although it still sucks)
 - coding
 - multiple choice and fill-in-the-blank
 - context-obedient question answering
 - theory of mind
+- misc/general
 
 This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
 ```