---
license: apache-2.0
language:
- zh
- en
tags:
- openba
---

# Introduction

OpenBA is an open-source 15B-parameter bilingual asymmetric seq2seq model pre-trained from scratch.

## Open Source Plan

We are excited to release two versions of our model, with a third on the way:

- [OpenBA-LM](https://huggingface.co/OpenBA/OpenBA-LM): The backbone language model, pre-trained from scratch on 340B English, Chinese, and code tokens.
- [OpenBA-Flan](https://huggingface.co/OpenBA/OpenBA-Flan): The base model after supervised fine-tuning on an additional 40B tokens from our collected BiFlan Dataset.
- OpenBA-Chat: coming soon

## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** zh, en (the multilingual tokenizer also leaves room for further multilingual training; see the sketch below)
- **License:** Apache 2.0
- **Resources for more information:**
  - [Paper](https://arxiv.org/abs/2309.10706)
  - [GitHub Repo](https://github.com/OpenNLG/OpenBA/)
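
A quick way to see the multilingual tokenizer at work is to segment mixed Chinese/English text. This is a minimal sketch assuming only the standard `transformers` tokenizer API; the example sentence is our own:

```python
from transformers import AutoTokenizer

# Load the OpenBA tokenizer (a multilingual SentencePiece vocabulary).
tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True)

# Mixed Chinese/English input is segmented by the same shared vocabulary,
# which is what leaves room for further multilingual training.
print(tokenizer.tokenize("苏州 (Suzhou) is a city on the Taihu Plain."))
```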

# Usage

## Install requirements

```bash
pip install transformers "torch>=2.0" sentencepiece  # quote the specifier so the shell does not treat ">" as a redirect
```

## Demo usage 

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True).half().cuda()
>>> model = model.eval()
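>>> # "<S>" marks the start of the input; "<extra_id_0>" is the sentinel for the span
>>> # the model should fill in. The Chinese prefix reads roughly: "Suzhou sits on the
>>> # Taihu Plain; along the river are high sandy plains, and the ri..." (cut off
>>> # mid-word so the model continues it).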
>>> query = "<S>" + "苏州处太湖平原,沿江为高沙平原,河" + "<extra_id_0>"
>>> inputs = tokenizer(query, return_tensors="pt").to("cuda")
>>> outputs = model.generate(**inputs, do_sample=True, max_new_tokens=32)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
流两侧为河淤平原,苏州平原是江苏平原主体,地势低平,土地肥沃,气候温和
```
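
The sampled continuation completes "河" into "河流" ("rivers") and translates roughly as: "on both sides of the rivers are alluvial plains; the Suzhou Plain is the main body of the Jiangsu plains, with low, flat terrain, fertile soil, and a mild climate." Since `do_sample=True`, your output will vary from run to run.

The prompt above follows the model's span-infilling scheme, so it can be wrapped in a small convenience helper. This is a sketch using the same API as the demo; `fill_span` is our own hypothetical name, not part of the released code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def fill_span(model, tokenizer, prefix: str, max_new_tokens: int = 32) -> str:
    """Wrap `prefix` in the "<S>" ... "<extra_id_0>" span-infilling format
    and return the decoded continuation. (Hypothetical helper.)"""
    query = "<S>" + prefix + "<extra_id_0>"
    inputs = tokenizer(query, return_tensors="pt").to(model.device)
    with torch.no_grad():  # inference only; no gradients needed
        outputs = model.generate(**inputs, do_sample=True, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True).half().cuda()
model.eval()
print(fill_span(model, tokenizer, "苏州处太湖平原,沿江为高沙平原,河"))
```

Note that `.half().cuda()` assumes a CUDA GPU is available; on CPU, drop both calls, though generation will be slow for a 15B-parameter model.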