---
inference: false
language:
- bg
license: mit
tags:
- torch
---

# LLaMA-7B

This repo contains a low-rank adapter (LoRA) for LLaMA-7B trained on a Bulgarian dataset.

The low-rank adaptation method used to train this adapter was introduced in [this paper](https://arxiv.org/abs/2106.09685).
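
As a brief recap of that technique, LoRA keeps the pretrained weight matrix frozen and learns only a low-rank update to it. In the paper's notation (the symbols below come from the paper, not from this repo's code):

$$
h = W_0 x + \Delta W x = W_0 x + B A x,
\qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k},\ r \ll \min(d, k)
$$

Only the small matrices $A$ and $B$ are stored in this repo, which is why the adapter is tiny compared to the 7B-parameter base model.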

## Model description

The training data is a private Bulgarian dataset.

## Intended uses & limitations

This is an instruction-following model, similar to ChatGPT, but in Bulgarian. The intended use is research only.

### How to use

Here is how to run the model from the command line:

```bash
pip install -r requirements.txt

python generate.py \
    --load_8bit \
    --base_model 'yahma/llama-7b-hf' \
    --lora_weights 'rmihaylov/alpaca-lora-bg-7b' \
    --share_gradio
```

This will download both the base model and the adapter from Hugging Face, then launch a Gradio interface for chatting.
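
If you prefer to load the adapter directly in Python instead of going through `generate.py`, a minimal sketch using the `transformers` and `peft` libraries might look like the following (the repository IDs are taken from the command above; the class names assume a recent `transformers` release, and the example prompt is an illustration, not part of this repo's code):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model_id = "yahma/llama-7b-hf"
lora_weights_id = "rmihaylov/alpaca-lora-bg-7b"

# Load the frozen base model and its tokenizer.
tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the low-rank adapter on top of the base weights.
model = PeftModel.from_pretrained(model, lora_weights_id)
model.eval()

# Example Bulgarian prompt: "Which is the capital of Bulgaria?"
prompt = "Коя е столицата на България?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```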