---
license: llama2
datasets:
- Cognitive-Lab/Kannada-Instruct-dataset
language:
- en
- kn
library_name: adapter-transformers
tags:
- kannada
- bilingual
---

# Ambari-7B-Instruct-v0.2

## Overview

Ambari-7B-Instruct-v0.2 is part of the Ambari series, a bilingual English/Kannada model family developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). The model is specialized for natural language understanding tasks, particularly instruction following. It is built upon the Ambari-7B-Base-v0.1 model and fine-tuned on a curated dataset of translated instructional pairs.

## Difference between v0.1 and v0.2

The v0.2 model was fine-tuned on the same dataset and with the same parameters as v0.1, but without vocabulary expansion: it uses the default Llama tokenizer. It was trained so that the two models can be evaluated side by side.
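Since the tokenizer is the only difference between the two releases, a quick way to see it is to load both tokenizers and compare their vocabulary sizes and how they split a Kannada sentence. This is a minimal sketch, assuming both model repositories are publicly available on the Hugging Face Hub; the sample sentence is illustrative only:

```python
from transformers import AutoTokenizer

# v0.1 uses an expanded vocabulary; v0.2 keeps the default Llama tokenizer.
tok_v01 = AutoTokenizer.from_pretrained("Cognitive-Lab/Ambari-7B-Instruct-v0.1")
tok_v02 = AutoTokenizer.from_pretrained("Cognitive-Lab/Ambari-7B-Instruct-v0.2")

print("vocab sizes:", len(tok_v01), len(tok_v02))

# Kannada text generally splits into fewer pieces with the expanded vocabulary.
sample = "ಕನ್ನಡದಲ್ಲಿ 10 ಅಧ್ಯಯನ ಸಲಹೆಗಳನ್ನು ನೀಡಿ."  # illustrative Kannada prompt
print("token counts:", len(tok_v01.tokenize(sample)), len(tok_v02.tokenize(sample)))
```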

## Usage

To use the Ambari-7B-Instruct-v0.2 model, you can follow the example code below:

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the instruct-tuned model and its tokenizer from the Hugging Face Hub
model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.2')
tokenizer = LlamaTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.2')

prompt = "Give me 10 Study tips in Kannada."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a response (up to 1000 tokens) and decode it back to text
generate_ids = model.generate(inputs.input_ids, max_length=1000)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

print(decoded_output)
```
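The snippet above loads the model in full precision on the CPU, which is slow for a 7B model. A hedged variant, assuming a CUDA GPU with enough memory is available, loads the weights in float16 and moves the inputs to the same device:

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# Assumption: a CUDA GPU is available; otherwise fall back to CPU in full precision.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = LlamaForCausalLM.from_pretrained(
    "Cognitive-Lab/Ambari-7B-Instruct-v0.2",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
tokenizer = LlamaTokenizer.from_pretrained("Cognitive-Lab/Ambari-7B-Instruct-v0.2")

prompt = "Give me 10 Study tips in Kannada."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

generate_ids = model.generate(inputs.input_ids, max_length=1000)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])
```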

## Learn More

Read more about the Ambari series and its applications in natural language understanding tasks on the [Cognitivelab.in blog](https://www.cognitivelab.in/blog/introducing-ambari).

## Dataset Information

The model is fine-tuned using the Kannada Instruct Dataset, a collection of translated instructional pairs. The dataset includes English instruction and output pairs, as well as their corresponding translations in Kannada. The intentional diversification of the dataset, encompassing various language combinations, enhances the model's proficiency in cross-lingual tasks.
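To inspect the instructional pairs yourself, the dataset can be loaded with the `datasets` library. This is a minimal sketch, assuming the repository exposes a `train` split; the column names are not listed here and should be checked against the dataset card:

```python
from datasets import load_dataset

# Assumption: the dataset is public and provides a "train" split.
ds = load_dataset("Cognitive-Lab/Kannada-Instruct-dataset", split="train")

print(ds)      # available columns and number of rows
print(ds[0])   # first instruction/output pair (English and Kannada fields)
```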

## Bilingual Instruct Fine-tuning

The model underwent supervised fine-tuning with low-rank adaptation, focused on bilingual instruction following. It was trained to respond in either English or Kannada, depending on the language specified in the user prompt or instruction.
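As a rough illustration of low-rank adaptation on top of the base model, the sketch below attaches a LoRA adapter with the `peft` library. The rank, alpha, dropout, and target modules are hypothetical placeholders, not the actual training configuration used for this release:

```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

# Load the base model that the instruct model was fine-tuned from.
base = LlamaForCausalLM.from_pretrained("Cognitive-Lab/Ambari-7B-base-v0.1")

# Hypothetical LoRA hyperparameters, shown only to illustrate the technique.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```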

## References

- [Ambari-7B-Instruct Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1)
- [Ambari-7B-Base Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1)
- [Kannada-Instruct-Dataset](https://huggingface.co/datasets/Cognitive-Lab/Kannada-Instruct-dataset)