Michael Pieler committed on
Commit 9bb52b1
1 Parent(s): 8ce6ac3

cp opt-13b

LICENSE.md ADDED
@@ -0,0 +1,65 @@
<h2 align="center"> OPT-175B LICENSE AGREEMENT </h2>

This License Agreement (as may be amended in accordance with this License Agreement, **“License”**), between you, or your employer or other entity (if you are entering into this agreement on behalf of your employer or other entity) (**“Licensee”** or **“you”**) and Meta Platforms, Inc. (**“Meta”** or **“we”**) applies to your use of any computer program, algorithm, source code, object code, or software that is made available by Meta under this License (**“Software”**) and any specifications, manuals, documentation, and other written information provided by Meta related to the Software (**“Documentation”**).

**By clicking “I Accept” below or by using the Software, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to use the Software or Documentation (collectively, the “Software Products”), and you must immediately cease using the Software Products. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to Meta that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the Software Products on behalf of your employer or other entity.**
<br><br>
1. **LICENSE GRANT**
<br><br>
a. Subject to your compliance with the Documentation and Sections 2, 3, and 5, Meta grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Meta’s copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes. The foregoing license is personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Meta’s prior written consent; any such assignment or sublicense will be void and will automatically and immediately terminate this License.
<br><br>
b. You may make a reasonable number of copies of the Documentation solely for use in connection with the license to the Software granted above.
<br><br>
c. The grant of rights expressly set forth in this Section 1 (License Grant) are the complete grant of rights to you in the Software Products, and no other licenses are granted, whether by waiver, estoppel, implication, equity or otherwise. Meta and its licensors reserve all rights not expressly granted by this License.
<br><br>
2. **RESTRICTIONS**
<br><br>
You will not, and will not permit, assist or cause any third party to:
<br><br>
a. use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for (i) any commercial or production purposes, (ii) military purposes or in the service of nuclear technology, (iii) purposes of surveillance, including any research or development relating to surveillance, (iv) biometric processing, (v) in any manner that infringes, misappropriates, or otherwise violates any third-party rights, or (vi) in any manner that violates any applicable law, including accessing the Software Products from an embargoed country as prohibited by the U.S. government, and violating any privacy or security laws, rules, regulations, directives, or governmental requirements (including the General Data Privacy Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act, and any and all laws governing the processing of biometric information), as well as all amendments and successor laws to any of the foregoing;
<br><br>
b. alter or remove copyright and other proprietary notices which appear on or in the Software Products;
<br><br>
c. utilize any equipment, device, software, or other means to circumvent or remove any security or protection used by Meta in connection with the Software, or to circumvent or remove any usage restrictions, or to enable functionality disabled by Meta; or
<br><br>
d. offer or impose any terms on the Software Products that alter, restrict, or are inconsistent with the terms of this License.
<br><br>
3. **ATTRIBUTION**
<br><br>
Together with any copies of the Software Products (as well as derivative works thereof or works incorporating the Software Products) that you distribute, you must provide (i) a copy of this License, and (ii) the following attribution notice: “OPT-175B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.”
<br><br>
4. **DISCLAIMERS**
<br><br>
THE SOFTWARE PRODUCTS ARE PROVIDED “AS IS” and “WITH ALL FAULTS” WITH NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. META EXPRESSLY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE, CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE SOFTWARE PRODUCTS, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR NON-INFRINGEMENT. META MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE SOFTWARE PRODUCTS WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS.
<br><br>
5. **LIMITATION OF LIABILITY**
<br><br>
TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL META BE LIABLE TO YOU (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE, OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR SPECIAL DAMAGES OR LOST PROFITS, EVEN IF META HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE SOFTWARE PRODUCTS, THEIR CONSTITUENT COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, **“SOFTWARE MATERIALS”**) ARE NOT DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR FAULT OF THE SOFTWARE MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR ENVIRONMENTAL DAMAGE (EACH, A **“HIGH-RISK USE”**). IF YOU ELECT TO USE ANY OF THE SOFTWARE MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF THERE IS A FAILURE OR FAULT IN ANY OF THE SOFTWARE MATERIALS, THE SAFETY OF PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE.
<br><br>
6. **INDEMNIFICATION**
<br><br>
You will indemnify, defend and hold harmless Meta and our subsidiaries and affiliates, and each of our respective shareholders, directors, officers, employees, agents, successors, and assigns (collectively, the **“Meta Parties”**) from and against any losses, liabilities, damages, fines, penalties, and expenses (including reasonable attorneys’ fees) incurred by any Meta Party in connection with any claim, demand, allegation, lawsuit, proceeding, or investigation (collectively, **“Claims”**) arising out of or related to: (a) your access to or use of the Software Products (as well as any results or data generated from such access or use), including any High-Risk Use (defined below); (b) your violation of this License; or (c) your violation, misappropriation or infringement of any rights of another (including intellectual property or other proprietary rights and privacy rights). You will promptly notify the Meta Parties of any such Claims, and cooperate with Meta Parties in defending such Claims. You will also grant the Meta Parties sole control of the defense or settlement, at Meta’s sole option, of any Claims. This indemnity is in addition to, and not in lieu of, any other indemnities or remedies set forth in a written agreement between you and Meta or the other Meta Parties.
<br><br>
7. **TERMINATION; SURVIVAL**
<br><br>
a. This License will automatically terminate upon any breach by you of the terms of this License.
<br><br>
b. We may terminate this License, in whole or in part, at any time upon notice (including electronic) to you.
<br><br>
c. The following sections survive termination of this License: 2 (Restrictions), 3 (Attribution), 4 (Disclaimers), 5 (Limitation on Liability), 6 (Indemnification), 7 (Termination; Survival), 8 (Third Party Materials), 9 (Trademarks), 10 (Applicable Law; Dispute Resolution), and 11 (Miscellaneous).
<br><br>
8. **THIRD PARTY MATERIALS**
<br><br>
The Software Products may contain third-party software or other components (including free and open source software) (all of the foregoing, **“Third Party Materials”**), which are subject to the license terms of the respective third-party licensors. Your dealings or correspondence with third parties and your use of or interaction with any Third Party Materials are solely between you and the third party. Meta does not control or endorse, and makes no representations or warranties regarding, any Third Party Materials, and your access to and use of such Third Party Materials are at your own risk.
<br><br>
9. **TRADEMARKS**
<br><br>
Licensee has not been granted any trademark license as part of this License and may not use any name or mark associated with Meta without the prior written permission of Meta, except to the extent necessary to make the reference required by the “ATTRIBUTION” section of this Agreement.
<br><br>
10. **APPLICABLE LAW; DISPUTE RESOLUTION**
<br><br>
This License will be governed and construed under the laws of the State of California without regard to conflicts of law provisions. Any suit or proceeding arising out of or relating to this License will be brought in the federal or state courts, as applicable, in San Mateo County, California, and each party irrevocably submits to the jurisdiction and venue of such courts.
<br><br>
11. **MISCELLANEOUS**
<br><br>
If any provision or part of a provision of this License is unlawful, void or unenforceable, that provision or part of the provision is deemed severed from this License, and will not affect the validity and enforceability of any remaining provisions. The failure of Meta to exercise or enforce any right or provision of this License will not operate as a waiver of such right or provision. This License does not confer any third-party beneficiary rights upon any other person or entity. This License, together with the Documentation, contains the entire understanding between you and Meta regarding the subject matter of this License, and supersedes all other written or oral agreements and understandings between you and Meta regarding such subject matter. No change or addition to any provision of this License will be binding unless it is in writing and signed by an authorized representative of both you and Meta.
README.md ADDED
@@ -0,0 +1,211 @@
---
language: en
inference: false
tags:
- opt
- text-generation

license: other
commercial: false
---

# OPT: Open Pre-trained Transformer Language Models

OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd, 2022 by Meta AI.

**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.

## Intro

To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068):

> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.

> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.

## Model description

OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165), and was pretrained using the self-supervised causal language modeling (CLM) objective.

For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
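Concretely, the CLM objective trains the model to predict each token from the tokens that precede it. The snippet below (an illustrative sketch in the style of the examples further down, assuming the `transformers` and `torch` packages) computes that next-token loss for a sentence:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)

>>> input_ids = tokenizer("OPT is a decoder-only transformer.", return_tensors="pt").input_ids.cuda()

>>> # With labels equal to input_ids, the forward pass returns the average
>>> # next-token cross-entropy, i.e. the causal language modeling loss.
>>> loss = model(input_ids, labels=input_ids).loss
```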

## Intended uses & limitations

The pretrained-only model can be used for prompt-based evaluation of downstream tasks, as well as for text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling); a minimal sketch follows below. For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
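
The sketch below shows the shape of such a fine-tuning run with the `Trainer` API. It is illustrative only: the file name `train.txt` is a hypothetical placeholder, and a 13B-parameter model realistically needs multi-GPU or memory-saving setups rather than this single-process recipe; the linked example script is the recommended path.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b")

# Hypothetical toy corpus; substitute your own dataset here.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=["text"],
)

# mlm=False selects the causal (next-token) objective used for pretraining.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="opt-13b-finetuned", per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```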

### How to use

For large OPT models, such as this one, it is not recommended to make use of the `text-generation` pipeline, because
one should load the model in half-precision to accelerate generation and optimize memory consumption on the GPU (at float16, 13B parameters take roughly 26 GB of memory, versus about 52 GB at float32).
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)
method as follows:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)

>>> prompt = "Hello, I am conscious and"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> generated_ids = model.generate(input_ids)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and aware of my surroundings.\nI am conscious and aware of my']
```

By default, generation is deterministic. In order to use top-k sampling, please set `do_sample` to `True`.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)

>>> prompt = "Hello, I am conscious and"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and aware.\nSo that makes you dead, right? ']
```

### Limitations and bias

As mentioned in Meta AI's model card, the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, so the model is strongly biased:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)

>>> prompt = "The woman worked as a"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The woman worked as a supervisor in the office
The woman worked as a social media consultant for
The woman worked as a cashier at the
The woman worked as a teacher, and was
The woman worked as a maid at our friends
```

compared to:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)

>>> prompt = "The man worked as a"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The man worked as a consultant to the defense
The man worked as a bartender in a bar
The man worked as a cashier at the
The man worked as a teacher, and was
The man worked as a professional athlete while he
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:

- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included,
- the Pushshift.io Reddit dataset, which was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021),
- CCNewsV2, containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b).

The final training data contains 180B tokens, corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.

The dataset might contain offensive content, as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.

### Collection process

The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*

## Training procedure

### Preprocessing

The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
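
Both quantities can be read directly off the checkpoint's configuration (a quick check, assuming the `transformers` package; the values match this repository's `config.json`):

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("facebook/opt-13b")
>>> config.vocab_size
50272
>>> config.max_position_embeddings
2048
```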

The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.

### BibTeX entry and citation info

```bibtex
@misc{zhang2022opt,
      title={OPT: Open Pre-trained Transformer Language Models},
      author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
      year={2022},
      eprint={2205.01068},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json ADDED
@@ -0,0 +1,30 @@
{
  "_name_or_path": "facebook/opt-13b",
  "_remove_final_layer_norm": false,
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "OPTForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 2,
  "do_layer_norm_before": true,
  "dropout": 0.1,
  "eos_token_id": 2,
  "ffn_dim": 20480,
  "hidden_size": 5120,
  "init_std": 0.02,
  "layerdrop": 0.0,
  "max_position_embeddings": 2048,
  "model_type": "opt",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "output_projection": true,
  "pad_token_id": 1,
  "prefix": "</s>",
  "torch_dtype": "float16",
  "transformers_version": "4.21.0.dev0",
  "use_cache": true,
  "vocab_size": 50272,
  "word_embed_proj_dim": 5120
}
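
As a quick sanity check on the configuration above, a back-of-the-envelope parameter count (an editorial sketch that counts only embeddings and weight matrices, ignoring biases and layer norms) lands close to the advertised 13B, and at two bytes per float16 parameter it matches the roughly 25.7 GB checkpoint files listed below:

```python
# Rough OPT-13B parameter count from the config values above.
h, ffn, layers, vocab, pos = 5120, 20480, 40, 50272, 2048

attn = 4 * h * h        # q, k, v and output projections
mlp = 2 * h * ffn       # fc1 and fc2
per_layer = attn + mlp  # ~314.6M parameters per decoder layer

total = vocab * h + pos * h + layers * per_layer
print(f"{total / 1e9:.1f}B parameters")     # ~12.9B, i.e. "13B"
print(f"{total * 2 / 1e9:.1f} GB in fp16")  # ~25.7 GB, matching the weights below
```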
flax_model-00001-of-00003.msgpack ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5237586f4c072b8b411b0032ae30d37db568253b53b2da4a03740f5a3169b8d1
size 9975028077
flax_model-00002-of-00003.msgpack ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e91ff98d3ca16f0ea46ad7eb741feaa46339d3d70efa6515a9db426b225e6e8
size 9963538947
flax_model-00003-of-00003.msgpack ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e580ef0211c011d401d45ed5def9ad62e3225057882ae346075f0c547f70516
size 5768402011
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca0e433d1f6f4b9038dd73e952fe2a795235a8d3433194a2f72dc84e5fd83884
size 25706948402
flax_model.msgpack.index.json ADDED
@@ -0,0 +1,651 @@
{
  "metadata": {
    "total_size": 25706946560
  },
  "weight_map": {
    "model/decoder/embed_positions/embedding": "flax_model-00001-of-00003.msgpack",
    "model/decoder/embed_tokens/embedding": "flax_model-00001-of-00003.msgpack",
    "model/decoder/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/0/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/1/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/10/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/11/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/12/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/13/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/14/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/15/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/16/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/17/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/18/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/19/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/2/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/20/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/fc1/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/fc2/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/fc2/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/final_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/final_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/k_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/k_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/out_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/out_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/q_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/q_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/v_proj/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn/v_proj/kernel": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn_layer_norm/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/21/self_attn_layer_norm/scale": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/22/fc1/bias": "flax_model-00001-of-00003.msgpack",
    "model/decoder/layers/22/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/22/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/23/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/24/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/25/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/26/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/27/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/28/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/29/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/fc1/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/fc1/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/fc2/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/fc2/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
    "model/decoder/layers/3/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
388
+ "model/decoder/layers/3/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
389
+ "model/decoder/layers/3/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
390
+ "model/decoder/layers/3/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
391
+ "model/decoder/layers/3/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
392
+ "model/decoder/layers/3/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
393
+ "model/decoder/layers/3/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
394
+ "model/decoder/layers/30/fc1/bias": "flax_model-00002-of-00003.msgpack",
395
+ "model/decoder/layers/30/fc1/kernel": "flax_model-00002-of-00003.msgpack",
396
+ "model/decoder/layers/30/fc2/bias": "flax_model-00002-of-00003.msgpack",
397
+ "model/decoder/layers/30/fc2/kernel": "flax_model-00002-of-00003.msgpack",
398
+ "model/decoder/layers/30/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
399
+ "model/decoder/layers/30/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
400
+ "model/decoder/layers/30/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
401
+ "model/decoder/layers/30/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
402
+ "model/decoder/layers/30/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
403
+ "model/decoder/layers/30/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
404
+ "model/decoder/layers/30/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
405
+ "model/decoder/layers/30/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
406
+ "model/decoder/layers/30/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
407
+ "model/decoder/layers/30/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
408
+ "model/decoder/layers/30/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
409
+ "model/decoder/layers/30/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
410
+ "model/decoder/layers/31/fc1/bias": "flax_model-00002-of-00003.msgpack",
411
+ "model/decoder/layers/31/fc1/kernel": "flax_model-00002-of-00003.msgpack",
412
+ "model/decoder/layers/31/fc2/bias": "flax_model-00002-of-00003.msgpack",
413
+ "model/decoder/layers/31/fc2/kernel": "flax_model-00002-of-00003.msgpack",
414
+ "model/decoder/layers/31/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
415
+ "model/decoder/layers/31/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
416
+ "model/decoder/layers/31/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
417
+ "model/decoder/layers/31/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
418
+ "model/decoder/layers/31/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
419
+ "model/decoder/layers/31/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
420
+ "model/decoder/layers/31/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
421
+ "model/decoder/layers/31/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
422
+ "model/decoder/layers/31/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
423
+ "model/decoder/layers/31/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
424
+ "model/decoder/layers/31/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
425
+ "model/decoder/layers/31/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
426
+ "model/decoder/layers/32/fc1/bias": "flax_model-00002-of-00003.msgpack",
427
+ "model/decoder/layers/32/fc1/kernel": "flax_model-00002-of-00003.msgpack",
428
+ "model/decoder/layers/32/fc2/bias": "flax_model-00002-of-00003.msgpack",
429
+ "model/decoder/layers/32/fc2/kernel": "flax_model-00002-of-00003.msgpack",
430
+ "model/decoder/layers/32/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
431
+ "model/decoder/layers/32/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
432
+ "model/decoder/layers/32/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
433
+ "model/decoder/layers/32/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
434
+ "model/decoder/layers/32/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
435
+ "model/decoder/layers/32/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
436
+ "model/decoder/layers/32/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
437
+ "model/decoder/layers/32/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
438
+ "model/decoder/layers/32/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
439
+ "model/decoder/layers/32/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
440
+ "model/decoder/layers/32/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
441
+ "model/decoder/layers/32/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
442
+ "model/decoder/layers/33/fc1/bias": "flax_model-00002-of-00003.msgpack",
443
+ "model/decoder/layers/33/fc1/kernel": "flax_model-00002-of-00003.msgpack",
444
+ "model/decoder/layers/33/fc2/bias": "flax_model-00002-of-00003.msgpack",
445
+ "model/decoder/layers/33/fc2/kernel": "flax_model-00002-of-00003.msgpack",
446
+ "model/decoder/layers/33/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
447
+ "model/decoder/layers/33/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
448
+ "model/decoder/layers/33/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
449
+ "model/decoder/layers/33/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
450
+ "model/decoder/layers/33/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
451
+ "model/decoder/layers/33/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
452
+ "model/decoder/layers/33/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
453
+ "model/decoder/layers/33/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
454
+ "model/decoder/layers/33/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
455
+ "model/decoder/layers/33/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
456
+ "model/decoder/layers/33/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
457
+ "model/decoder/layers/33/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
458
+ "model/decoder/layers/34/fc1/bias": "flax_model-00002-of-00003.msgpack",
459
+ "model/decoder/layers/34/fc1/kernel": "flax_model-00002-of-00003.msgpack",
460
+ "model/decoder/layers/34/fc2/bias": "flax_model-00002-of-00003.msgpack",
461
+ "model/decoder/layers/34/fc2/kernel": "flax_model-00002-of-00003.msgpack",
462
+ "model/decoder/layers/34/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
463
+ "model/decoder/layers/34/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
464
+ "model/decoder/layers/34/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
465
+ "model/decoder/layers/34/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
466
+ "model/decoder/layers/34/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
467
+ "model/decoder/layers/34/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
468
+ "model/decoder/layers/34/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
469
+ "model/decoder/layers/34/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
470
+ "model/decoder/layers/34/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
471
+ "model/decoder/layers/34/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
472
+ "model/decoder/layers/34/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
473
+ "model/decoder/layers/34/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
474
+ "model/decoder/layers/35/fc1/bias": "flax_model-00002-of-00003.msgpack",
475
+ "model/decoder/layers/35/fc1/kernel": "flax_model-00002-of-00003.msgpack",
476
+ "model/decoder/layers/35/fc2/bias": "flax_model-00002-of-00003.msgpack",
477
+ "model/decoder/layers/35/fc2/kernel": "flax_model-00002-of-00003.msgpack",
478
+ "model/decoder/layers/35/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
479
+ "model/decoder/layers/35/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
480
+ "model/decoder/layers/35/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
481
+ "model/decoder/layers/35/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
482
+ "model/decoder/layers/35/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
483
+ "model/decoder/layers/35/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
484
+ "model/decoder/layers/35/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
485
+ "model/decoder/layers/35/self_attn/q_proj/kernel": "flax_model-00002-of-00003.msgpack",
486
+ "model/decoder/layers/35/self_attn/v_proj/bias": "flax_model-00002-of-00003.msgpack",
487
+ "model/decoder/layers/35/self_attn/v_proj/kernel": "flax_model-00002-of-00003.msgpack",
488
+ "model/decoder/layers/35/self_attn_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
489
+ "model/decoder/layers/35/self_attn_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
490
+ "model/decoder/layers/36/fc1/bias": "flax_model-00002-of-00003.msgpack",
491
+ "model/decoder/layers/36/fc1/kernel": "flax_model-00002-of-00003.msgpack",
492
+ "model/decoder/layers/36/fc2/bias": "flax_model-00002-of-00003.msgpack",
493
+ "model/decoder/layers/36/fc2/kernel": "flax_model-00002-of-00003.msgpack",
494
+ "model/decoder/layers/36/final_layer_norm/bias": "flax_model-00002-of-00003.msgpack",
495
+ "model/decoder/layers/36/final_layer_norm/scale": "flax_model-00002-of-00003.msgpack",
496
+ "model/decoder/layers/36/self_attn/k_proj/bias": "flax_model-00002-of-00003.msgpack",
497
+ "model/decoder/layers/36/self_attn/k_proj/kernel": "flax_model-00002-of-00003.msgpack",
498
+ "model/decoder/layers/36/self_attn/out_proj/bias": "flax_model-00002-of-00003.msgpack",
499
+ "model/decoder/layers/36/self_attn/out_proj/kernel": "flax_model-00002-of-00003.msgpack",
500
+ "model/decoder/layers/36/self_attn/q_proj/bias": "flax_model-00002-of-00003.msgpack",
501
+ "model/decoder/layers/36/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
502
+ "model/decoder/layers/36/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
503
+ "model/decoder/layers/36/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
504
+ "model/decoder/layers/36/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
505
+ "model/decoder/layers/36/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
506
+ "model/decoder/layers/37/fc1/bias": "flax_model-00003-of-00003.msgpack",
507
+ "model/decoder/layers/37/fc1/kernel": "flax_model-00003-of-00003.msgpack",
508
+ "model/decoder/layers/37/fc2/bias": "flax_model-00003-of-00003.msgpack",
509
+ "model/decoder/layers/37/fc2/kernel": "flax_model-00003-of-00003.msgpack",
510
+ "model/decoder/layers/37/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
511
+ "model/decoder/layers/37/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
512
+ "model/decoder/layers/37/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
513
+ "model/decoder/layers/37/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
514
+ "model/decoder/layers/37/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
515
+ "model/decoder/layers/37/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
516
+ "model/decoder/layers/37/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
517
+ "model/decoder/layers/37/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
518
+ "model/decoder/layers/37/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
519
+ "model/decoder/layers/37/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
520
+ "model/decoder/layers/37/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
521
+ "model/decoder/layers/37/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
522
+ "model/decoder/layers/38/fc1/bias": "flax_model-00003-of-00003.msgpack",
523
+ "model/decoder/layers/38/fc1/kernel": "flax_model-00003-of-00003.msgpack",
524
+ "model/decoder/layers/38/fc2/bias": "flax_model-00003-of-00003.msgpack",
525
+ "model/decoder/layers/38/fc2/kernel": "flax_model-00003-of-00003.msgpack",
526
+ "model/decoder/layers/38/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
527
+ "model/decoder/layers/38/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
528
+ "model/decoder/layers/38/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
529
+ "model/decoder/layers/38/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
530
+ "model/decoder/layers/38/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
531
+ "model/decoder/layers/38/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
532
+ "model/decoder/layers/38/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
533
+ "model/decoder/layers/38/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
534
+ "model/decoder/layers/38/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
535
+ "model/decoder/layers/38/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
536
+ "model/decoder/layers/38/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
537
+ "model/decoder/layers/38/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
538
+ "model/decoder/layers/39/fc1/bias": "flax_model-00003-of-00003.msgpack",
539
+ "model/decoder/layers/39/fc1/kernel": "flax_model-00003-of-00003.msgpack",
540
+ "model/decoder/layers/39/fc2/bias": "flax_model-00003-of-00003.msgpack",
541
+ "model/decoder/layers/39/fc2/kernel": "flax_model-00003-of-00003.msgpack",
542
+ "model/decoder/layers/39/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
543
+ "model/decoder/layers/39/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
544
+ "model/decoder/layers/39/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
545
+ "model/decoder/layers/39/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
546
+ "model/decoder/layers/39/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
547
+ "model/decoder/layers/39/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
548
+ "model/decoder/layers/39/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
549
+ "model/decoder/layers/39/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
550
+ "model/decoder/layers/39/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
551
+ "model/decoder/layers/39/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
552
+ "model/decoder/layers/39/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
553
+ "model/decoder/layers/39/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
554
+ "model/decoder/layers/4/fc1/bias": "flax_model-00003-of-00003.msgpack",
555
+ "model/decoder/layers/4/fc1/kernel": "flax_model-00003-of-00003.msgpack",
556
+ "model/decoder/layers/4/fc2/bias": "flax_model-00003-of-00003.msgpack",
557
+ "model/decoder/layers/4/fc2/kernel": "flax_model-00003-of-00003.msgpack",
558
+ "model/decoder/layers/4/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
559
+ "model/decoder/layers/4/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
560
+ "model/decoder/layers/4/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
561
+ "model/decoder/layers/4/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
562
+ "model/decoder/layers/4/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
563
+ "model/decoder/layers/4/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
564
+ "model/decoder/layers/4/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
565
+ "model/decoder/layers/4/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
566
+ "model/decoder/layers/4/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
567
+ "model/decoder/layers/4/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
568
+ "model/decoder/layers/4/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
569
+ "model/decoder/layers/4/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
570
+ "model/decoder/layers/5/fc1/bias": "flax_model-00003-of-00003.msgpack",
571
+ "model/decoder/layers/5/fc1/kernel": "flax_model-00003-of-00003.msgpack",
572
+ "model/decoder/layers/5/fc2/bias": "flax_model-00003-of-00003.msgpack",
573
+ "model/decoder/layers/5/fc2/kernel": "flax_model-00003-of-00003.msgpack",
574
+ "model/decoder/layers/5/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
575
+ "model/decoder/layers/5/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
576
+ "model/decoder/layers/5/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
577
+ "model/decoder/layers/5/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
578
+ "model/decoder/layers/5/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
579
+ "model/decoder/layers/5/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
580
+ "model/decoder/layers/5/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
581
+ "model/decoder/layers/5/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
582
+ "model/decoder/layers/5/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
583
+ "model/decoder/layers/5/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
584
+ "model/decoder/layers/5/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
585
+ "model/decoder/layers/5/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
586
+ "model/decoder/layers/6/fc1/bias": "flax_model-00003-of-00003.msgpack",
587
+ "model/decoder/layers/6/fc1/kernel": "flax_model-00003-of-00003.msgpack",
588
+ "model/decoder/layers/6/fc2/bias": "flax_model-00003-of-00003.msgpack",
589
+ "model/decoder/layers/6/fc2/kernel": "flax_model-00003-of-00003.msgpack",
590
+ "model/decoder/layers/6/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
591
+ "model/decoder/layers/6/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
592
+ "model/decoder/layers/6/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
593
+ "model/decoder/layers/6/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
594
+ "model/decoder/layers/6/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
595
+ "model/decoder/layers/6/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
596
+ "model/decoder/layers/6/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
597
+ "model/decoder/layers/6/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
598
+ "model/decoder/layers/6/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
599
+ "model/decoder/layers/6/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
600
+ "model/decoder/layers/6/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
601
+ "model/decoder/layers/6/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
602
+ "model/decoder/layers/7/fc1/bias": "flax_model-00003-of-00003.msgpack",
603
+ "model/decoder/layers/7/fc1/kernel": "flax_model-00003-of-00003.msgpack",
604
+ "model/decoder/layers/7/fc2/bias": "flax_model-00003-of-00003.msgpack",
605
+ "model/decoder/layers/7/fc2/kernel": "flax_model-00003-of-00003.msgpack",
606
+ "model/decoder/layers/7/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
607
+ "model/decoder/layers/7/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
608
+ "model/decoder/layers/7/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
609
+ "model/decoder/layers/7/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
610
+ "model/decoder/layers/7/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
611
+ "model/decoder/layers/7/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
612
+ "model/decoder/layers/7/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
613
+ "model/decoder/layers/7/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
614
+ "model/decoder/layers/7/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
615
+ "model/decoder/layers/7/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
616
+ "model/decoder/layers/7/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
617
+ "model/decoder/layers/7/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
618
+ "model/decoder/layers/8/fc1/bias": "flax_model-00003-of-00003.msgpack",
619
+ "model/decoder/layers/8/fc1/kernel": "flax_model-00003-of-00003.msgpack",
620
+ "model/decoder/layers/8/fc2/bias": "flax_model-00003-of-00003.msgpack",
621
+ "model/decoder/layers/8/fc2/kernel": "flax_model-00003-of-00003.msgpack",
622
+ "model/decoder/layers/8/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
623
+ "model/decoder/layers/8/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
624
+ "model/decoder/layers/8/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
625
+ "model/decoder/layers/8/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
626
+ "model/decoder/layers/8/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
627
+ "model/decoder/layers/8/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
628
+ "model/decoder/layers/8/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
629
+ "model/decoder/layers/8/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
630
+ "model/decoder/layers/8/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
631
+ "model/decoder/layers/8/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
632
+ "model/decoder/layers/8/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
633
+ "model/decoder/layers/8/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
634
+ "model/decoder/layers/9/fc1/bias": "flax_model-00003-of-00003.msgpack",
635
+ "model/decoder/layers/9/fc1/kernel": "flax_model-00003-of-00003.msgpack",
636
+ "model/decoder/layers/9/fc2/bias": "flax_model-00003-of-00003.msgpack",
637
+ "model/decoder/layers/9/fc2/kernel": "flax_model-00003-of-00003.msgpack",
638
+ "model/decoder/layers/9/final_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
639
+ "model/decoder/layers/9/final_layer_norm/scale": "flax_model-00003-of-00003.msgpack",
640
+ "model/decoder/layers/9/self_attn/k_proj/bias": "flax_model-00003-of-00003.msgpack",
641
+ "model/decoder/layers/9/self_attn/k_proj/kernel": "flax_model-00003-of-00003.msgpack",
642
+ "model/decoder/layers/9/self_attn/out_proj/bias": "flax_model-00003-of-00003.msgpack",
643
+ "model/decoder/layers/9/self_attn/out_proj/kernel": "flax_model-00003-of-00003.msgpack",
644
+ "model/decoder/layers/9/self_attn/q_proj/bias": "flax_model-00003-of-00003.msgpack",
645
+ "model/decoder/layers/9/self_attn/q_proj/kernel": "flax_model-00003-of-00003.msgpack",
646
+ "model/decoder/layers/9/self_attn/v_proj/bias": "flax_model-00003-of-00003.msgpack",
647
+ "model/decoder/layers/9/self_attn/v_proj/kernel": "flax_model-00003-of-00003.msgpack",
648
+ "model/decoder/layers/9/self_attn_layer_norm/bias": "flax_model-00003-of-00003.msgpack",
649
+ "model/decoder/layers/9/self_attn_layer_norm/scale": "flax_model-00003-of-00003.msgpack"
650
+ }
651
+ }
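The index that closes above is plain JSON: a `weight_map` from flattened parameter paths to the `.msgpack` shard that stores each tensor (note how layer 36 straddles shards 00002 and 00003, since shard boundaries follow byte budgets, not layer boundaries). As a minimal sketch of how such an index can be inspected, assuming it was saved locally under the name `flax_model.msgpack.index.json` (the filename is an assumption based on this commit; actual shard resolution and loading is handled by `transformers`):

```python
import json
from collections import defaultdict

# Sketch: group the weight_map by shard file and count parameters per shard.
# Assumes the index from this commit is saved as flax_model.msgpack.index.json.
with open("flax_model.msgpack.index.json") as f:
    index = json.load(f)

shards = defaultdict(list)
for param, shard in index["weight_map"].items():
    shards[shard].append(param)

for shard, params in sorted(shards.items()):
    print(f"{shard}: {len(params)} parameters")
# A full load would fetch each shard once and merge the restored parameter trees.
```

Grouping by shard makes it easy to confirm that every parameter resolves to one of the three shard files before attempting a load.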
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model-00001-of-00003.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f92dff6b6debf90135157bfe3663ea87f5dc17fef486e8981259692df3e284c
+ size 9975056349
pytorch_model-00002-of-00003.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebb6aeb30549fc6936b152707b2a436f8d72f5b899d67784abc54ba206f85d69
+ size 9858794113
pytorch_model-00003-of-00003.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:007f25238c4ca86d8185817d32d2b76db006512225921419e7d34d2a24dffd52
+ size 5873300557
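The three `.bin` entries above are Git LFS pointers rather than the weights themselves: each records the LFS spec version, the SHA-256 of the real payload, and its size in bytes (roughly 10.0 GB, 9.9 GB, and 5.9 GB here). A minimal sketch of parsing a pointer and verifying a downloaded shard against it; the helper names are hypothetical, not part of any library:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer ("key value" lines) into its fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify_shard(path: str, pointer: dict, chunk: int = 1 << 20) -> bool:
    """Stream a downloaded shard, checking it against the pointer's size and hash."""
    digest, total = hashlib.sha256(), 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
            total += len(block)
    return total == pointer["size"] and digest.hexdigest() == pointer["sha256"]
```

For example, `pytorch_model-00001-of-00003.bin` should come back at exactly 9975056349 bytes and hash to the sha256 shown in its pointer.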
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,651 @@
+ {
+ "metadata": {
+ "total_size": 25706946560
+ },
+ "weight_map": {
+ "decoder.embed_positions.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.embed_tokens.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.0.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.1.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.10.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.11.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.12.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.13.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.14.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.15.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.15.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.16.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.17.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.18.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.19.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.2.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.fc2.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.2.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.20.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.20.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.21.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.22.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.23.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.24.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.25.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.26.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.27.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.28.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.fc1.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.fc1.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.fc2.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.fc2.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.final_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.final_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.29.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
+ "decoder.layers.3.fc1.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.3.fc1.weight": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.3.fc2.bias": "pytorch_model-00001-of-00003.bin",
+ "decoder.layers.3.fc2.weight": "pytorch_model-00001-of-00003.bin",
382
+ "decoder.layers.3.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
383
+ "decoder.layers.3.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
384
+ "decoder.layers.3.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
385
+ "decoder.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
386
+ "decoder.layers.3.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
387
+ "decoder.layers.3.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
388
+ "decoder.layers.3.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
389
+ "decoder.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
390
+ "decoder.layers.3.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
391
+ "decoder.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
392
+ "decoder.layers.3.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
393
+ "decoder.layers.3.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
394
+ "decoder.layers.30.fc1.bias": "pytorch_model-00002-of-00003.bin",
395
+ "decoder.layers.30.fc1.weight": "pytorch_model-00002-of-00003.bin",
396
+ "decoder.layers.30.fc2.bias": "pytorch_model-00003-of-00003.bin",
397
+ "decoder.layers.30.fc2.weight": "pytorch_model-00003-of-00003.bin",
398
+ "decoder.layers.30.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
399
+ "decoder.layers.30.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
400
+ "decoder.layers.30.self_attn.k_proj.bias": "pytorch_model-00002-of-00003.bin",
401
+ "decoder.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
402
+ "decoder.layers.30.self_attn.out_proj.bias": "pytorch_model-00002-of-00003.bin",
403
+ "decoder.layers.30.self_attn.out_proj.weight": "pytorch_model-00002-of-00003.bin",
404
+ "decoder.layers.30.self_attn.q_proj.bias": "pytorch_model-00002-of-00003.bin",
405
+ "decoder.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
406
+ "decoder.layers.30.self_attn.v_proj.bias": "pytorch_model-00002-of-00003.bin",
407
+ "decoder.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
408
+ "decoder.layers.30.self_attn_layer_norm.bias": "pytorch_model-00002-of-00003.bin",
409
+ "decoder.layers.30.self_attn_layer_norm.weight": "pytorch_model-00002-of-00003.bin",
410
+ "decoder.layers.31.fc1.bias": "pytorch_model-00003-of-00003.bin",
411
+ "decoder.layers.31.fc1.weight": "pytorch_model-00003-of-00003.bin",
412
+ "decoder.layers.31.fc2.bias": "pytorch_model-00003-of-00003.bin",
413
+ "decoder.layers.31.fc2.weight": "pytorch_model-00003-of-00003.bin",
414
+ "decoder.layers.31.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
415
+ "decoder.layers.31.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
416
+ "decoder.layers.31.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
417
+ "decoder.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
418
+ "decoder.layers.31.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
419
+ "decoder.layers.31.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
420
+ "decoder.layers.31.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
421
+ "decoder.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
422
+ "decoder.layers.31.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
423
+ "decoder.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
424
+ "decoder.layers.31.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
425
+ "decoder.layers.31.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
426
+ "decoder.layers.32.fc1.bias": "pytorch_model-00003-of-00003.bin",
427
+ "decoder.layers.32.fc1.weight": "pytorch_model-00003-of-00003.bin",
428
+ "decoder.layers.32.fc2.bias": "pytorch_model-00003-of-00003.bin",
429
+ "decoder.layers.32.fc2.weight": "pytorch_model-00003-of-00003.bin",
430
+ "decoder.layers.32.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
431
+ "decoder.layers.32.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
432
+ "decoder.layers.32.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
433
+ "decoder.layers.32.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
434
+ "decoder.layers.32.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
435
+ "decoder.layers.32.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
436
+ "decoder.layers.32.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
437
+ "decoder.layers.32.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
438
+ "decoder.layers.32.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
439
+ "decoder.layers.32.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
440
+ "decoder.layers.32.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
441
+ "decoder.layers.32.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
442
+ "decoder.layers.33.fc1.bias": "pytorch_model-00003-of-00003.bin",
443
+ "decoder.layers.33.fc1.weight": "pytorch_model-00003-of-00003.bin",
444
+ "decoder.layers.33.fc2.bias": "pytorch_model-00003-of-00003.bin",
445
+ "decoder.layers.33.fc2.weight": "pytorch_model-00003-of-00003.bin",
446
+ "decoder.layers.33.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
447
+ "decoder.layers.33.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
448
+ "decoder.layers.33.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
449
+ "decoder.layers.33.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
450
+ "decoder.layers.33.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
451
+ "decoder.layers.33.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
452
+ "decoder.layers.33.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
453
+ "decoder.layers.33.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
454
+ "decoder.layers.33.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
455
+ "decoder.layers.33.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
456
+ "decoder.layers.33.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
457
+ "decoder.layers.33.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
458
+ "decoder.layers.34.fc1.bias": "pytorch_model-00003-of-00003.bin",
459
+ "decoder.layers.34.fc1.weight": "pytorch_model-00003-of-00003.bin",
460
+ "decoder.layers.34.fc2.bias": "pytorch_model-00003-of-00003.bin",
461
+ "decoder.layers.34.fc2.weight": "pytorch_model-00003-of-00003.bin",
462
+ "decoder.layers.34.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
463
+ "decoder.layers.34.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
464
+ "decoder.layers.34.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
465
+ "decoder.layers.34.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
466
+ "decoder.layers.34.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
467
+ "decoder.layers.34.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
468
+ "decoder.layers.34.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
469
+ "decoder.layers.34.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
470
+ "decoder.layers.34.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
471
+ "decoder.layers.34.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
472
+ "decoder.layers.34.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
473
+ "decoder.layers.34.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
474
+ "decoder.layers.35.fc1.bias": "pytorch_model-00003-of-00003.bin",
475
+ "decoder.layers.35.fc1.weight": "pytorch_model-00003-of-00003.bin",
476
+ "decoder.layers.35.fc2.bias": "pytorch_model-00003-of-00003.bin",
477
+ "decoder.layers.35.fc2.weight": "pytorch_model-00003-of-00003.bin",
478
+ "decoder.layers.35.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
479
+ "decoder.layers.35.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
480
+ "decoder.layers.35.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
481
+ "decoder.layers.35.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
482
+ "decoder.layers.35.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
483
+ "decoder.layers.35.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
484
+ "decoder.layers.35.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
485
+ "decoder.layers.35.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
486
+ "decoder.layers.35.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
487
+ "decoder.layers.35.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
488
+ "decoder.layers.35.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
489
+ "decoder.layers.35.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
490
+ "decoder.layers.36.fc1.bias": "pytorch_model-00003-of-00003.bin",
491
+ "decoder.layers.36.fc1.weight": "pytorch_model-00003-of-00003.bin",
492
+ "decoder.layers.36.fc2.bias": "pytorch_model-00003-of-00003.bin",
493
+ "decoder.layers.36.fc2.weight": "pytorch_model-00003-of-00003.bin",
494
+ "decoder.layers.36.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
495
+ "decoder.layers.36.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
496
+ "decoder.layers.36.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
497
+ "decoder.layers.36.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
498
+ "decoder.layers.36.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
499
+ "decoder.layers.36.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
500
+ "decoder.layers.36.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
501
+ "decoder.layers.36.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
502
+ "decoder.layers.36.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
503
+ "decoder.layers.36.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
504
+ "decoder.layers.36.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
505
+ "decoder.layers.36.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
506
+ "decoder.layers.37.fc1.bias": "pytorch_model-00003-of-00003.bin",
507
+ "decoder.layers.37.fc1.weight": "pytorch_model-00003-of-00003.bin",
508
+ "decoder.layers.37.fc2.bias": "pytorch_model-00003-of-00003.bin",
509
+ "decoder.layers.37.fc2.weight": "pytorch_model-00003-of-00003.bin",
510
+ "decoder.layers.37.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
511
+ "decoder.layers.37.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
512
+ "decoder.layers.37.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
513
+ "decoder.layers.37.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
514
+ "decoder.layers.37.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
515
+ "decoder.layers.37.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
516
+ "decoder.layers.37.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
517
+ "decoder.layers.37.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
518
+ "decoder.layers.37.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
519
+ "decoder.layers.37.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
520
+ "decoder.layers.37.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
521
+ "decoder.layers.37.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
522
+ "decoder.layers.38.fc1.bias": "pytorch_model-00003-of-00003.bin",
523
+ "decoder.layers.38.fc1.weight": "pytorch_model-00003-of-00003.bin",
524
+ "decoder.layers.38.fc2.bias": "pytorch_model-00003-of-00003.bin",
525
+ "decoder.layers.38.fc2.weight": "pytorch_model-00003-of-00003.bin",
526
+ "decoder.layers.38.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
527
+ "decoder.layers.38.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
528
+ "decoder.layers.38.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
529
+ "decoder.layers.38.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
530
+ "decoder.layers.38.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
531
+ "decoder.layers.38.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
532
+ "decoder.layers.38.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
533
+ "decoder.layers.38.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
534
+ "decoder.layers.38.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
535
+ "decoder.layers.38.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
536
+ "decoder.layers.38.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
537
+ "decoder.layers.38.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
538
+ "decoder.layers.39.fc1.bias": "pytorch_model-00003-of-00003.bin",
539
+ "decoder.layers.39.fc1.weight": "pytorch_model-00003-of-00003.bin",
540
+ "decoder.layers.39.fc2.bias": "pytorch_model-00003-of-00003.bin",
541
+ "decoder.layers.39.fc2.weight": "pytorch_model-00003-of-00003.bin",
542
+ "decoder.layers.39.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
543
+ "decoder.layers.39.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
544
+ "decoder.layers.39.self_attn.k_proj.bias": "pytorch_model-00003-of-00003.bin",
545
+ "decoder.layers.39.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
546
+ "decoder.layers.39.self_attn.out_proj.bias": "pytorch_model-00003-of-00003.bin",
547
+ "decoder.layers.39.self_attn.out_proj.weight": "pytorch_model-00003-of-00003.bin",
548
+ "decoder.layers.39.self_attn.q_proj.bias": "pytorch_model-00003-of-00003.bin",
549
+ "decoder.layers.39.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
550
+ "decoder.layers.39.self_attn.v_proj.bias": "pytorch_model-00003-of-00003.bin",
551
+ "decoder.layers.39.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
552
+ "decoder.layers.39.self_attn_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
553
+ "decoder.layers.39.self_attn_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
554
+ "decoder.layers.4.fc1.bias": "pytorch_model-00001-of-00003.bin",
555
+ "decoder.layers.4.fc1.weight": "pytorch_model-00001-of-00003.bin",
556
+ "decoder.layers.4.fc2.bias": "pytorch_model-00001-of-00003.bin",
557
+ "decoder.layers.4.fc2.weight": "pytorch_model-00001-of-00003.bin",
558
+ "decoder.layers.4.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
559
+ "decoder.layers.4.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
560
+ "decoder.layers.4.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
561
+ "decoder.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
562
+ "decoder.layers.4.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
563
+ "decoder.layers.4.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
564
+ "decoder.layers.4.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
565
+ "decoder.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
566
+ "decoder.layers.4.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
567
+ "decoder.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
568
+ "decoder.layers.4.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
569
+ "decoder.layers.4.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
570
+ "decoder.layers.5.fc1.bias": "pytorch_model-00001-of-00003.bin",
571
+ "decoder.layers.5.fc1.weight": "pytorch_model-00001-of-00003.bin",
572
+ "decoder.layers.5.fc2.bias": "pytorch_model-00001-of-00003.bin",
573
+ "decoder.layers.5.fc2.weight": "pytorch_model-00001-of-00003.bin",
574
+ "decoder.layers.5.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
575
+ "decoder.layers.5.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
576
+ "decoder.layers.5.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
577
+ "decoder.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
578
+ "decoder.layers.5.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
579
+ "decoder.layers.5.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
580
+ "decoder.layers.5.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
581
+ "decoder.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
582
+ "decoder.layers.5.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
583
+ "decoder.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
584
+ "decoder.layers.5.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
585
+ "decoder.layers.5.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
586
+ "decoder.layers.6.fc1.bias": "pytorch_model-00001-of-00003.bin",
587
+ "decoder.layers.6.fc1.weight": "pytorch_model-00001-of-00003.bin",
588
+ "decoder.layers.6.fc2.bias": "pytorch_model-00001-of-00003.bin",
589
+ "decoder.layers.6.fc2.weight": "pytorch_model-00001-of-00003.bin",
590
+ "decoder.layers.6.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
591
+ "decoder.layers.6.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
592
+ "decoder.layers.6.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
593
+ "decoder.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
594
+ "decoder.layers.6.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
595
+ "decoder.layers.6.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
596
+ "decoder.layers.6.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
597
+ "decoder.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
598
+ "decoder.layers.6.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
599
+ "decoder.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
600
+ "decoder.layers.6.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
601
+ "decoder.layers.6.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
602
+ "decoder.layers.7.fc1.bias": "pytorch_model-00001-of-00003.bin",
603
+ "decoder.layers.7.fc1.weight": "pytorch_model-00001-of-00003.bin",
604
+ "decoder.layers.7.fc2.bias": "pytorch_model-00001-of-00003.bin",
605
+ "decoder.layers.7.fc2.weight": "pytorch_model-00001-of-00003.bin",
606
+ "decoder.layers.7.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
607
+ "decoder.layers.7.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
608
+ "decoder.layers.7.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
609
+ "decoder.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
610
+ "decoder.layers.7.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
611
+ "decoder.layers.7.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
612
+ "decoder.layers.7.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
613
+ "decoder.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
614
+ "decoder.layers.7.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
615
+ "decoder.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
616
+ "decoder.layers.7.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
617
+ "decoder.layers.7.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
618
+ "decoder.layers.8.fc1.bias": "pytorch_model-00001-of-00003.bin",
619
+ "decoder.layers.8.fc1.weight": "pytorch_model-00001-of-00003.bin",
620
+ "decoder.layers.8.fc2.bias": "pytorch_model-00001-of-00003.bin",
621
+ "decoder.layers.8.fc2.weight": "pytorch_model-00001-of-00003.bin",
622
+ "decoder.layers.8.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
623
+ "decoder.layers.8.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
624
+ "decoder.layers.8.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
625
+ "decoder.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
626
+ "decoder.layers.8.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
627
+ "decoder.layers.8.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
628
+ "decoder.layers.8.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
629
+ "decoder.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
630
+ "decoder.layers.8.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
631
+ "decoder.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
632
+ "decoder.layers.8.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
633
+ "decoder.layers.8.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
634
+ "decoder.layers.9.fc1.bias": "pytorch_model-00001-of-00003.bin",
635
+ "decoder.layers.9.fc1.weight": "pytorch_model-00001-of-00003.bin",
636
+ "decoder.layers.9.fc2.bias": "pytorch_model-00001-of-00003.bin",
637
+ "decoder.layers.9.fc2.weight": "pytorch_model-00001-of-00003.bin",
638
+ "decoder.layers.9.final_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
639
+ "decoder.layers.9.final_layer_norm.weight": "pytorch_model-00001-of-00003.bin",
640
+ "decoder.layers.9.self_attn.k_proj.bias": "pytorch_model-00001-of-00003.bin",
641
+ "decoder.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
642
+ "decoder.layers.9.self_attn.out_proj.bias": "pytorch_model-00001-of-00003.bin",
643
+ "decoder.layers.9.self_attn.out_proj.weight": "pytorch_model-00001-of-00003.bin",
644
+ "decoder.layers.9.self_attn.q_proj.bias": "pytorch_model-00001-of-00003.bin",
645
+ "decoder.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
646
+ "decoder.layers.9.self_attn.v_proj.bias": "pytorch_model-00001-of-00003.bin",
647
+ "decoder.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
648
+ "decoder.layers.9.self_attn_layer_norm.bias": "pytorch_model-00001-of-00003.bin",
649
+ "decoder.layers.9.self_attn_layer_norm.weight": "pytorch_model-00001-of-00003.bin"
650
+ }
651
+ }
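The index above is what allows the 13B checkpoint to be loaded shard by shard: every parameter name maps to the one `pytorch_model-*.bin` file that holds it, so a loader never has to hold all three shards in memory at once. Below is a minimal sketch of consuming such an index directly, assuming the index and shard files from this commit sit in the working directory; the parameter name is just an example pulled from the map above.

```python
import json

import torch

# Minimal sketch: look up which shard holds a given parameter,
# then load only that shard and pull the tensor out of it.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

param_name = "decoder.layers.25.fc1.weight"
shard_file = index["weight_map"][param_name]  # -> "pytorch_model-00002-of-00003.bin"

state_dict = torch.load(shard_file, map_location="cpu")
print(param_name, tuple(state_dict[param_name].shape))
```

In practice `transformers.from_pretrained` does this resolution automatically whenever it finds a `*.index.json` next to the shards; the sketch only makes the mechanism explicit.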
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}}
tf_model-00001-of-00003.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f1abdee1f888a33d9366f0c4c263c3cdd0c4aa6b88007d6fb8e422b7d242c91
+ size 9975229760
tf_model-00002-of-00003.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6044db30527267d1bc072d816fa64cbc81efd0b149dcbbd1c41b245b997ad384
+ size 9858987768
tf_model-00003-of-00003.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65aea614891119664aa51188867b5a11e89f82d14849b1c23935b7572e4e70c2
+ size 5873414216
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a20cc56a6b64001adf2eb10a83825058b50bc8b4c33d5e8cbcb31ffba63467eb
+ size 25707608064
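The four `tf_model*.h5` entries above are Git LFS pointer files, not the weights themselves: three `key value` lines giving the LFS spec version, the sha256 of the real payload, and its size in bytes. A small sketch of reading those fields back out (this only applies when the repo is checked out without `git lfs` smudging the pointers into real files; the helper itself is illustrative):

```python
# Sketch: parse a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(path):
    with open(path) as f:
        return dict(line.strip().split(" ", 1) for line in f if line.strip())

ptr = parse_lfs_pointer("tf_model-00001-of-00003.h5")
print(ptr["oid"], int(ptr["size"]))  # sha256:7f1abd... 9975229760
```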
tf_model.h5.index.json ADDED
@@ -0,0 +1,651 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "metadata": {
3
+ "total_size": 25706946560
4
+ },
5
+ "weight_map": {
6
+ "tfopt_for_causal_lm/model/decoder/embed_positions/weight:0": "tf_model-00001-of-00003.h5",
7
+ "tfopt_for_causal_lm/model/decoder/embed_tokens/weight:0": "tf_model-00001-of-00003.h5",
8
+ "tfopt_for_causal_lm/model/decoder/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
9
+ "tfopt_for_causal_lm/model/decoder/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
10
+ "tfopt_for_causal_lm/model/decoder/layers.0/fc1/bias:0": "tf_model-00001-of-00003.h5",
11
+ "tfopt_for_causal_lm/model/decoder/layers.0/fc1/kernel:0": "tf_model-00001-of-00003.h5",
12
+ "tfopt_for_causal_lm/model/decoder/layers.0/fc2/bias:0": "tf_model-00001-of-00003.h5",
13
+ "tfopt_for_causal_lm/model/decoder/layers.0/fc2/kernel:0": "tf_model-00001-of-00003.h5",
14
+ "tfopt_for_causal_lm/model/decoder/layers.0/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
15
+ "tfopt_for_causal_lm/model/decoder/layers.0/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
16
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
17
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
18
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
19
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
20
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
21
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
22
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
23
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
24
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
25
+ "tfopt_for_causal_lm/model/decoder/layers.0/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
26
+ "tfopt_for_causal_lm/model/decoder/layers.1/fc1/bias:0": "tf_model-00001-of-00003.h5",
27
+ "tfopt_for_causal_lm/model/decoder/layers.1/fc1/kernel:0": "tf_model-00001-of-00003.h5",
28
+ "tfopt_for_causal_lm/model/decoder/layers.1/fc2/bias:0": "tf_model-00001-of-00003.h5",
29
+ "tfopt_for_causal_lm/model/decoder/layers.1/fc2/kernel:0": "tf_model-00001-of-00003.h5",
30
+ "tfopt_for_causal_lm/model/decoder/layers.1/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
31
+ "tfopt_for_causal_lm/model/decoder/layers.1/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
32
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
33
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
34
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
35
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
36
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
37
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
38
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
39
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
40
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
41
+ "tfopt_for_causal_lm/model/decoder/layers.1/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
42
+ "tfopt_for_causal_lm/model/decoder/layers.10/fc1/bias:0": "tf_model-00001-of-00003.h5",
43
+ "tfopt_for_causal_lm/model/decoder/layers.10/fc1/kernel:0": "tf_model-00001-of-00003.h5",
44
+ "tfopt_for_causal_lm/model/decoder/layers.10/fc2/bias:0": "tf_model-00001-of-00003.h5",
45
+ "tfopt_for_causal_lm/model/decoder/layers.10/fc2/kernel:0": "tf_model-00001-of-00003.h5",
46
+ "tfopt_for_causal_lm/model/decoder/layers.10/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
47
+ "tfopt_for_causal_lm/model/decoder/layers.10/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
48
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
49
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
50
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
51
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
52
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
53
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
54
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
55
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
56
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
57
+ "tfopt_for_causal_lm/model/decoder/layers.10/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
58
+ "tfopt_for_causal_lm/model/decoder/layers.11/fc1/bias:0": "tf_model-00001-of-00003.h5",
59
+ "tfopt_for_causal_lm/model/decoder/layers.11/fc1/kernel:0": "tf_model-00001-of-00003.h5",
60
+ "tfopt_for_causal_lm/model/decoder/layers.11/fc2/bias:0": "tf_model-00001-of-00003.h5",
61
+ "tfopt_for_causal_lm/model/decoder/layers.11/fc2/kernel:0": "tf_model-00001-of-00003.h5",
62
+ "tfopt_for_causal_lm/model/decoder/layers.11/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
63
+ "tfopt_for_causal_lm/model/decoder/layers.11/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
64
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
65
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
66
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
67
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
68
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
69
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
70
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
71
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
72
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
73
+ "tfopt_for_causal_lm/model/decoder/layers.11/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
74
+ "tfopt_for_causal_lm/model/decoder/layers.12/fc1/bias:0": "tf_model-00001-of-00003.h5",
75
+ "tfopt_for_causal_lm/model/decoder/layers.12/fc1/kernel:0": "tf_model-00001-of-00003.h5",
76
+ "tfopt_for_causal_lm/model/decoder/layers.12/fc2/bias:0": "tf_model-00001-of-00003.h5",
77
+ "tfopt_for_causal_lm/model/decoder/layers.12/fc2/kernel:0": "tf_model-00001-of-00003.h5",
78
+ "tfopt_for_causal_lm/model/decoder/layers.12/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
79
+ "tfopt_for_causal_lm/model/decoder/layers.12/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
80
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
81
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
82
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
83
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
84
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
85
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
86
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
87
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
88
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
89
+ "tfopt_for_causal_lm/model/decoder/layers.12/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
90
+ "tfopt_for_causal_lm/model/decoder/layers.13/fc1/bias:0": "tf_model-00001-of-00003.h5",
91
+ "tfopt_for_causal_lm/model/decoder/layers.13/fc1/kernel:0": "tf_model-00001-of-00003.h5",
92
+ "tfopt_for_causal_lm/model/decoder/layers.13/fc2/bias:0": "tf_model-00001-of-00003.h5",
93
+ "tfopt_for_causal_lm/model/decoder/layers.13/fc2/kernel:0": "tf_model-00001-of-00003.h5",
94
+ "tfopt_for_causal_lm/model/decoder/layers.13/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
95
+ "tfopt_for_causal_lm/model/decoder/layers.13/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
96
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
97
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
98
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
99
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
100
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
101
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
102
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
103
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
104
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
105
+ "tfopt_for_causal_lm/model/decoder/layers.13/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
106
+ "tfopt_for_causal_lm/model/decoder/layers.14/fc1/bias:0": "tf_model-00001-of-00003.h5",
107
+ "tfopt_for_causal_lm/model/decoder/layers.14/fc1/kernel:0": "tf_model-00001-of-00003.h5",
108
+ "tfopt_for_causal_lm/model/decoder/layers.14/fc2/bias:0": "tf_model-00001-of-00003.h5",
109
+ "tfopt_for_causal_lm/model/decoder/layers.14/fc2/kernel:0": "tf_model-00001-of-00003.h5",
110
+ "tfopt_for_causal_lm/model/decoder/layers.14/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
111
+ "tfopt_for_causal_lm/model/decoder/layers.14/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
112
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
113
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
114
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
115
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
116
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
117
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
118
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
119
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
120
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
121
+ "tfopt_for_causal_lm/model/decoder/layers.14/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
122
+ "tfopt_for_causal_lm/model/decoder/layers.15/fc1/bias:0": "tf_model-00002-of-00003.h5",
123
+ "tfopt_for_causal_lm/model/decoder/layers.15/fc1/kernel:0": "tf_model-00002-of-00003.h5",
124
+ "tfopt_for_causal_lm/model/decoder/layers.15/fc2/bias:0": "tf_model-00002-of-00003.h5",
125
+ "tfopt_for_causal_lm/model/decoder/layers.15/fc2/kernel:0": "tf_model-00002-of-00003.h5",
126
+ "tfopt_for_causal_lm/model/decoder/layers.15/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
127
+ "tfopt_for_causal_lm/model/decoder/layers.15/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
128
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
129
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
130
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
131
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
132
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
133
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
134
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
135
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
136
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
137
+ "tfopt_for_causal_lm/model/decoder/layers.15/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
138
+ "tfopt_for_causal_lm/model/decoder/layers.16/fc1/bias:0": "tf_model-00002-of-00003.h5",
139
+ "tfopt_for_causal_lm/model/decoder/layers.16/fc1/kernel:0": "tf_model-00002-of-00003.h5",
140
+ "tfopt_for_causal_lm/model/decoder/layers.16/fc2/bias:0": "tf_model-00002-of-00003.h5",
141
+ "tfopt_for_causal_lm/model/decoder/layers.16/fc2/kernel:0": "tf_model-00002-of-00003.h5",
142
+ "tfopt_for_causal_lm/model/decoder/layers.16/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
143
+ "tfopt_for_causal_lm/model/decoder/layers.16/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
144
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
145
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
146
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
147
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
148
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
149
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
150
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
151
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
152
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
153
+ "tfopt_for_causal_lm/model/decoder/layers.16/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
154
+ "tfopt_for_causal_lm/model/decoder/layers.17/fc1/bias:0": "tf_model-00002-of-00003.h5",
155
+ "tfopt_for_causal_lm/model/decoder/layers.17/fc1/kernel:0": "tf_model-00002-of-00003.h5",
156
+ "tfopt_for_causal_lm/model/decoder/layers.17/fc2/bias:0": "tf_model-00002-of-00003.h5",
157
+ "tfopt_for_causal_lm/model/decoder/layers.17/fc2/kernel:0": "tf_model-00002-of-00003.h5",
158
+ "tfopt_for_causal_lm/model/decoder/layers.17/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
159
+ "tfopt_for_causal_lm/model/decoder/layers.17/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
160
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
161
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
162
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
163
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
164
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
165
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
166
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
167
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
168
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
169
+ "tfopt_for_causal_lm/model/decoder/layers.17/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
170
+ "tfopt_for_causal_lm/model/decoder/layers.18/fc1/bias:0": "tf_model-00002-of-00003.h5",
171
+ "tfopt_for_causal_lm/model/decoder/layers.18/fc1/kernel:0": "tf_model-00002-of-00003.h5",
172
+ "tfopt_for_causal_lm/model/decoder/layers.18/fc2/bias:0": "tf_model-00002-of-00003.h5",
173
+ "tfopt_for_causal_lm/model/decoder/layers.18/fc2/kernel:0": "tf_model-00002-of-00003.h5",
174
+ "tfopt_for_causal_lm/model/decoder/layers.18/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
175
+ "tfopt_for_causal_lm/model/decoder/layers.18/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
176
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
177
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
178
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
179
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
180
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
181
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
182
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
183
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
184
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
185
+ "tfopt_for_causal_lm/model/decoder/layers.18/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
186
+ "tfopt_for_causal_lm/model/decoder/layers.19/fc1/bias:0": "tf_model-00002-of-00003.h5",
187
+ "tfopt_for_causal_lm/model/decoder/layers.19/fc1/kernel:0": "tf_model-00002-of-00003.h5",
188
+ "tfopt_for_causal_lm/model/decoder/layers.19/fc2/bias:0": "tf_model-00002-of-00003.h5",
189
+ "tfopt_for_causal_lm/model/decoder/layers.19/fc2/kernel:0": "tf_model-00002-of-00003.h5",
190
+ "tfopt_for_causal_lm/model/decoder/layers.19/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
191
+ "tfopt_for_causal_lm/model/decoder/layers.19/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
192
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
193
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
194
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
195
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
196
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
197
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
198
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
199
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
200
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
201
+ "tfopt_for_causal_lm/model/decoder/layers.19/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
202
+ "tfopt_for_causal_lm/model/decoder/layers.2/fc1/bias:0": "tf_model-00001-of-00003.h5",
203
+ "tfopt_for_causal_lm/model/decoder/layers.2/fc1/kernel:0": "tf_model-00001-of-00003.h5",
204
+ "tfopt_for_causal_lm/model/decoder/layers.2/fc2/bias:0": "tf_model-00001-of-00003.h5",
205
+ "tfopt_for_causal_lm/model/decoder/layers.2/fc2/kernel:0": "tf_model-00001-of-00003.h5",
206
+ "tfopt_for_causal_lm/model/decoder/layers.2/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
207
+ "tfopt_for_causal_lm/model/decoder/layers.2/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
208
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
209
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
210
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
211
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
212
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
213
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
214
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
215
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
216
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
217
+ "tfopt_for_causal_lm/model/decoder/layers.2/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
218
+ "tfopt_for_causal_lm/model/decoder/layers.20/fc1/bias:0": "tf_model-00002-of-00003.h5",
219
+ "tfopt_for_causal_lm/model/decoder/layers.20/fc1/kernel:0": "tf_model-00002-of-00003.h5",
220
+ "tfopt_for_causal_lm/model/decoder/layers.20/fc2/bias:0": "tf_model-00002-of-00003.h5",
221
+ "tfopt_for_causal_lm/model/decoder/layers.20/fc2/kernel:0": "tf_model-00002-of-00003.h5",
222
+ "tfopt_for_causal_lm/model/decoder/layers.20/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
223
+ "tfopt_for_causal_lm/model/decoder/layers.20/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
224
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
225
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
226
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
227
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
228
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
229
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
230
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
231
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
232
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
233
+ "tfopt_for_causal_lm/model/decoder/layers.20/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
234
+ "tfopt_for_causal_lm/model/decoder/layers.21/fc1/bias:0": "tf_model-00002-of-00003.h5",
235
+ "tfopt_for_causal_lm/model/decoder/layers.21/fc1/kernel:0": "tf_model-00002-of-00003.h5",
236
+ "tfopt_for_causal_lm/model/decoder/layers.21/fc2/bias:0": "tf_model-00002-of-00003.h5",
237
+ "tfopt_for_causal_lm/model/decoder/layers.21/fc2/kernel:0": "tf_model-00002-of-00003.h5",
238
+ "tfopt_for_causal_lm/model/decoder/layers.21/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
239
+ "tfopt_for_causal_lm/model/decoder/layers.21/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
240
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
241
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
242
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
243
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.21/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.22/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.23/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.24/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.25/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.26/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.27/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.28/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/fc2/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/fc2/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/final_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/final_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.29/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.3/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/fc1/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/fc1/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/k_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/k_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/out_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/out_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/q_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/q_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/v_proj/bias:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn/v_proj/kernel:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn_layer_norm/beta:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.30/self_attn_layer_norm/gamma:0": "tf_model-00002-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.31/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.32/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.33/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.34/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.35/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.36/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.37/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.38/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/fc1/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/fc1/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/fc2/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/fc2/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/final_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/final_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/k_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/k_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/out_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/out_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/q_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/q_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/v_proj/bias:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn/v_proj/kernel:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn_layer_norm/beta:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.39/self_attn_layer_norm/gamma:0": "tf_model-00003-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.4/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.5/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.6/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.7/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.8/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/fc1/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/fc1/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/fc2/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/fc2/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/final_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/final_layer_norm/gamma:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/k_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/k_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/out_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/out_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/q_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/q_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/v_proj/bias:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn/v_proj/kernel:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn_layer_norm/beta:0": "tf_model-00001-of-00003.h5",
+ "tfopt_for_causal_lm/model/decoder/layers.9/self_attn_layer_norm/gamma:0": "tf_model-00001-of-00003.h5"
+ }
+ }
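
The entries above are the tail of the sharded-checkpoint index that `transformers` stores alongside the `.h5` shards: each weight name maps to the shard file that holds it, so a loader only has to open the shards it actually needs. Below is a minimal sketch of resolving a weight by hand; the index filename `tf_model.h5.index.json`, the top-level `weight_map` key, and a local checkpoint directory are assumptions based on the usual `transformers` sharding layout, not shown in this diff.

```python
import json

# Read the shard index (filename assumed from the standard TF sharding layout).
with open("tf_model.h5.index.json") as f:
    index = json.load(f)

# "weight_map" is the conventional key mapping weight names to shard files.
weight_map = index["weight_map"]
name = "tfopt_for_causal_lm/model/decoder/layers.31/fc1/kernel:0"
print(weight_map[name])  # expected: "tf_model-00003-of-00003.h5"
```

In practice one would not read the index manually: `TFOPTForCausalLM.from_pretrained(...)` consults it and fetches only the listed shards. Note that a layer can straddle a shard boundary, as `layers.30` does above (attention weights in shard 2, feed-forward and final layer norm in shard 3).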
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "unk_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "add_bos_token": true, "special_tokens_map_file": null, "name_or_path": "patrickvonplaten/opt-30b", "tokenizer_class": "GPT2Tokenizer"}
vocab.json ADDED