---
license: llama2
datasets:
- ehartford/samantha-data
- Open-Orca/OpenOrca
- jondurbin/airoboros-gpt4-1.4.1
language:
- en
---


# FashionGPT-V1

### Introduction

This is a Llama-2-70B model combined with two QLoRA-trained adapters (see the Training section below).

### Dataset

Here is the list of datasets used:

* Orca-style 40K dataset. This dataset is a filtered subset of [OpenOrca-GPT4](<https://huggingface.co/datasets/Open-Orca/OpenOrca/blob/main/1M-GPT4-Augmented.parquet>) and [airoboros-gpt4-1.4.1](<https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1>).
* [Samantha](<https://huggingface.co/datasets/ehartford/samantha-data>), created by Eric Hartford and cleaned by us, about 6.5K samples (see the loading sketch below).
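
A minimal loading sketch, assuming the 🤗 `datasets` library. The 40K Orca-style subset itself was produced by additional filtering that is not reproduced here, and file or split names may need adjusting to the current dataset layouts.

```python
# Sketch only: pull the source datasets from the Hub with the `datasets` library.
# The filtered 40K Orca-style subset is not reproduced here.
from datasets import load_dataset

# GPT-4-augmented portion of OpenOrca (the parquet file referenced above).
openorca_gpt4 = load_dataset(
    "Open-Orca/OpenOrca",
    data_files="1M-GPT4-Augmented.parquet",
    split="train",
)

# File/split names below may need adjusting to the repositories' current layouts.
airoboros = load_dataset("jondurbin/airoboros-gpt4-1.4.1", split="train")
samantha = load_dataset("ehartford/samantha-data", split="train")

print(len(openorca_gpt4), len(airoboros), len(samantha))
```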

<br>

### Training

* We trained our adapters with [jondurbin's forked QLoRA repo](<https://github.com/jondurbin/qlora>).
* We added multi-turn conversational data support from the [FastChat repo](<https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py>), with minor modifications.
* We used a bash shell script similar to the one for [airoboros-70b-gpt4-1.4.1](<https://gist.github.com/jondurbin/87fc040b92a3073125ed516b04bc6e19>) to train our two adapters.
* We found that combining multiple adapters with a single Llama-2-70B achieves better performance than merging only one adapter into it; a sketch of such a merge is shown below. The details of combining multiple adapters will be presented in our upcoming paper.
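
The following is a minimal sketch, assuming the PEFT library, of sequentially merging two LoRA adapters into a Llama-2-70B base. It is a generic merge, not necessarily the exact combination method used for this model; the adapter paths are placeholders.

```python
# Sketch only: sequentially merge two LoRA adapters into a Llama-2-70B base with PEFT.
# This is a generic merge, not necessarily the exact combination method used here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-70b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Fold the first adapter's LoRA weights into the base weights.
model = PeftModel.from_pretrained(model, "path/to/orca-style-adapter")  # placeholder path
model = model.merge_and_unload()

# Fold the second adapter on top of the already-merged weights.
model = PeftModel.from_pretrained(model, "path/to/samantha-adapter")  # placeholder path
model = model.merge_and_unload()

model.save_pretrained("FashionGPT-70B-V1-merged")
tokenizer.save_pretrained("FashionGPT-70B-V1-merged")
```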

<br>

### Prompt Template

```
### System:
{System}
### User:
{User}
### Assistant:
{Assistant}
```
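
A minimal generation sketch with 🤗 Transformers, assuming this repository's model id (taken from the leaderboard link below) and the template above; generation parameters are illustrative, not tuned recommendations.

```python
# Sketch only: format a prompt with the template above and generate a response.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ICBU-NPU/FashionGPT-70B-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = "You are a helpful, honest assistant."
user = "Summarize what a LoRA adapter is in one sentence."
prompt = f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```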

<br>

### Evaluation

| Metric                | Value |
|-----------------------|-------|
| ARC (25-shot)         | 71.08 |
| HellaSwag (10-shot)   | 87.32 |
| MMLU (5-shot)         | 70.70 |
| TruthfulQA (0-shot)   | 63.92 |
| Avg.                  | 73.26 |

<br>

### License Disclaimer

This model is bound by the license and usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

### Limitations & Biases

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at <https://ai.meta.com/llama/responsible-use-guide/>.

<br>

### Citation

* airoboros: <https://github.com/jondurbin/airoboros>
* samantha: <https://erichartford.com/meet-samantha>
```bibtex
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, 
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```
```bibtex
@software{touvron2023llama2,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and
 Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and
 Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and
 Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and
 Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and
 Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and
 Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
  year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ICBU-NPU__FashionGPT-70B-V1)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 58.85   |
| ARC (25-shot)         | 71.08          |
| HellaSwag (10-shot)   | 87.32    |
| MMLU (5-shot)         | 70.7         |
| TruthfulQA (0-shot)   | 63.92   |
| Winogrande (5-shot)   | 83.66   |
| GSM8K (5-shot)        | 28.13        |
| DROP (3-shot)         | 7.13         |