|
--- |
|
language: |
|
- id |
|
license: apache-2.0 |
|
tags: |
|
- Indonesian |
|
- Chat |
|
- Instruct |
|
base_model: |
|
- meta-llama/Llama-3.2-3B-Instruct |
|
datasets: |
|
- NekoFi/alpaca-gpt4-indonesia-cleaned |
|
pipeline_tag: text-generation |
|
model-index: |
|
- name: FinMatcha-3B-Instruct |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: IFEval (0-Shot) |
|
type: HuggingFaceH4/ifeval |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: inst_level_strict_acc and prompt_level_strict_acc |
|
value: 75.48 |
|
name: strict accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: BBH (3-Shot) |
|
type: BBH |
|
args: |
|
num_few_shot: 3 |
|
metrics: |
|
- type: acc_norm |
|
value: 23.19 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MATH Lvl 5 (4-Shot) |
|
type: hendrycks/competition_math |
|
args: |
|
num_few_shot: 4 |
|
metrics: |
|
- type: exact_match |
|
value: 12.39 |
|
name: exact match |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GPQA (0-shot) |
|
type: Idavidrein/gpqa |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 2.57 |
|
name: acc_norm |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MuSR (0-shot) |
|
type: TAUR-Lab/MuSR |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 5.02 |
|
name: acc_norm |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU-PRO (5-shot) |
|
type: TIGER-Lab/MMLU-Pro |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 24.24 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
![image/jpeg](https://huggingface.co/xMaulana/FinMatcha-3B-Instruct/resolve/main/image.jpg) |
|
|
|
# FinMatcha-3B-Instruct |
|
|
|
FinMatcha-3B-Instruct is an Indonesian-focused large language model (LLM) fine-tuned from the [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) base model. It has been trained to handle a wide range of conversational tasks, with a particular emphasis on understanding and generating Indonesian text.
|
|
|
The model was fine-tuned on Indonesian instruction data, making it adept at handling the nuances of the Indonesian language, from formal to colloquial registers. It also retains English capability for bilingual applications.
|
|
|
## Model Details |
|
|
|
- **Finetuned from model**: [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) |
|
- **Dataset**: [NekoFi/alpaca-gpt4-indonesia-cleaned](https://huggingface.co/datasets/NekoFi/alpaca-gpt4-indonesia-cleaned) |
|
- **Model Size**: 3B |
|
- **License**: [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) |
|
- **Languages**: Indonesian, English |
|
|
|
## How to use |
|
|
|
### Installation |
|
|
|
To use the FinMatcha model, install the required dependencies (Accelerate is needed for `device_map="auto"` in the examples below):
|
|
|
```bash
pip install "transformers>=4.45" accelerate
```
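
For a quick check after installing, the high-level `pipeline` API can also be used. This is a minimal sketch, not from the original card; the prompt and generation parameters are illustrative, and passing chat-style messages to the pipeline assumes the `transformers` version required above.

```python
import torch
from transformers import pipeline

# Load the model through the text-generation pipeline (half precision, automatic device placement).
pipe = pipeline(
    "text-generation",
    model="xMaulana/FinMatcha-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Chat-style message; the Indonesian prompt asks "What is the capital of Indonesia?"
messages = [{"role": "user", "content": "Apa ibu kota Indonesia?"}]
result = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.7)

# The pipeline returns the full conversation; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```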
|
|
|
### Usage |
|
An end-to-end example notebook is available on [Google Colab](https://colab.research.google.com/drive/14TuDacCjHDadOY9kFkRjvORgU-cEo3D8?usp=sharing).
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xMaulana/FinMatcha-3B-Instruct"

# Load the model in half precision and let Accelerate place it on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Example Indonesian prompt: "How can a country come into being?"
inputs = tokenizer("Bagaimanakah sebuah negara dapat terbentuk?", return_tensors="pt").to(model.device)

# Pass the full encoding (input_ids and attention_mask) to generate.
outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    do_sample=True,
    top_k=5,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
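
Because FinMatcha is an instruction-tuned Llama 3.2 variant, prompts formatted with the tokenizer's chat template typically match the training format more closely than raw strings. The snippet below is a minimal sketch of that approach; it reuses the `model` and `tokenizer` objects loaded above, and the system message and generation settings are illustrative rather than the card's recommended values.

```python
# Build a chat-formatted prompt (system + user turns) with the model's built-in chat template.
messages = [
    # "You are a helpful assistant and answer in Indonesian."
    {"role": "system", "content": "Anda adalah asisten yang membantu dan menjawab dalam bahasa Indonesia."},
    # "Explain briefly how a country can be formed."
    {"role": "user", "content": "Jelaskan secara singkat bagaimana sebuah negara dapat terbentuk."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts its reply
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```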
|
|
|
## Limitations |
|
|
|
- The model is primarily focused on the Indonesian language and may not perform as well on non-Indonesian tasks. |
|
- As with all LLMs, cultural and contextual biases can be present. |
|
|
|
## License |
|
|
|
The model is licensed under the [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
|
|
|
## Contributing |
|
|
|
We welcome contributions to improve FinMatcha. Feel free to open issues or submit pull requests.
|
|
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_xMaulana__FinMatcha-3B-Instruct).
|
|
|
| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.81 |
| IFEval (0-Shot)     | 75.48 |
| BBH (3-Shot)        | 23.19 |
| MATH Lvl 5 (4-Shot) | 12.39 |
| GPQA (0-shot)       |  2.57 |
| MuSR (0-shot)       |  5.02 |
| MMLU-PRO (5-shot)   | 24.24 |
|
|
|
|