jncraton committed
Commit 88b6821
1 Parent(s): f115134

Upload folder using huggingface_hub

README.md ADDED
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
---

# M2M100 1.2B

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.

The model can directly translate between the 9,900 translation directions of its 100 languages.
To translate into a target language, the target language id must be forced as the first generated token; to do so, pass the `forced_bos_token_id` parameter to the `generate` method.

*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*

To install `sentencepiece`, run `pip install sentencepiece`.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."

# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
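
The same translation can also be run through the `pipeline` API, which wraps the tokenizer and model shown above and handles the forced target-language token internally. A minimal sketch, assuming a recent `transformers` release in which the translation pipeline accepts `src_lang`/`tgt_lang` per call:

```python
from transformers import pipeline

# Builds the M2M100 tokenizer/model pair shown above under the hood.
translator = pipeline("translation", model="facebook/m2m100_1.2B")

# The language pair is chosen per call via src_lang/tgt_lang.
result = translator("जीवन एक चॉकलेट बॉक्स की तरह है।", src_lang="hi", tgt_lang="fr")
print(result)
# expected, per the example above:
# [{'translation_text': "La vie est comme une boîte de chocolat."}]
```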

See the [model hub](https://huggingface.co/models?filter=m2m_100) for more fine-tuned versions.

## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)

## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
      title={Beyond English-Centric Multilingual Machine Translation},
      author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
      year={2020},
      eprint={2010.11125},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json ADDED
{
  "add_source_bos": false,
  "add_source_eos": false,
  "bos_token": "<s>",
  "decoder_start_token": "</s>",
  "eos_token": "</s>",
  "layer_norm_epsilon": null,
  "unk_token": "<unk>"
}
model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c97df052a558895317312470e1ff7cb8eae5416f7ae16214a2983c6853dd3ce5
size 1249655149
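
The `config.json` above uses keys such as `add_source_bos` and `decoder_start_token`, and the repository ships a single `model.bin` alongside a `shared_vocabulary.json`, which matches the layout CTranslate2 uses for converted models. Assuming that is the format here, a minimal loading sketch (the local directory name and the SentencePiece model filename are assumptions, not files confirmed by this diff):

```python
import ctranslate2
import sentencepiece as spm

# Assumed local paths: download this repo to "m2m100_1.2B_ct2" and supply the
# matching SentencePiece model (not shown in this diff).
translator = ctranslate2.Translator("m2m100_1.2B_ct2")
sp = spm.SentencePieceProcessor(model_file="m2m100_1.2B_ct2/sentencepiece.model")

text = "生活就像一盒巧克力。"  # "Life is like a box of chocolate."
source = ["__zh__"] + sp.encode(text, out_type=str)  # prepend the source-language token

# The target language is selected by decoding with its token as a prefix.
results = translator.translate_batch([source], target_prefix=[["__en__"]])
target = results[0].hypotheses[0][1:]  # drop the leading "__en__" token
print(sp.decode(target))
```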
shared_vocabulary.json ADDED
The diff for this file is too large to render.
 
special_tokens_map.json ADDED
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "additional_special_tokens": ["__af__", "__am__", "__ar__", "__ast__", "__az__", "__ba__", "__be__", "__bg__", "__bn__", "__br__", "__bs__", "__ca__", "__ceb__", "__cs__", "__cy__", "__da__", "__de__", "__el__", "__en__", "__es__", "__et__", "__fa__", "__ff__", "__fi__", "__fr__", "__fy__", "__ga__", "__gd__", "__gl__", "__gu__", "__ha__", "__he__", "__hi__", "__hr__", "__ht__", "__hu__", "__hy__", "__id__", "__ig__", "__ilo__", "__is__", "__it__", "__ja__", "__jv__", "__ka__", "__kk__", "__km__", "__kn__", "__ko__", "__lb__", "__lg__", "__ln__", "__lo__", "__lt__", "__lv__", "__mg__", "__mk__", "__ml__", "__mn__", "__mr__", "__ms__", "__my__", "__ne__", "__nl__", "__no__", "__ns__", "__oc__", "__or__", "__pa__", "__pl__", "__ps__", "__pt__", "__ro__", "__ru__", "__sd__", "__si__", "__sk__", "__sl__", "__so__", "__sq__", "__sr__", "__ss__", "__su__", "__sv__", "__sw__", "__ta__", "__th__", "__tl__", "__tn__", "__tr__", "__uk__", "__ur__", "__uz__", "__vi__", "__wo__", "__xh__", "__yi__", "__yo__", "__zh__", "__zu__"]}
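
These `__xx__` entries are the language-id tokens that the `get_lang_id` call in the README example resolves. A quick check, using the upstream `facebook/m2m100_1.2B` tokenizer as a stand-in for this converted repository (an assumption):

```python
from transformers import M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# get_lang_id maps a language code to the vocabulary id of its "__xx__" token.
fr_id = tokenizer.get_lang_id("fr")
print(fr_id, tokenizer.convert_ids_to_tokens(fr_id))  # prints the id and "__fr__"
```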
tokenizer_config.json ADDED
{"src_lang": null, "tgt_lang": null, "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "special_tokens_map_file": "m2m_100_1.2B_v2/special_tokens_map.json", "tokenizer_file": null, "name_or_path": "m2m_100_1.2B_v2"}
vocab.json ADDED
The diff for this file is too large to render.