matchaaaaa committed on
Commit 15fc2b8
1 Parent(s): c5bde6a

Upload 23 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ chaifighter-v3-cute.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,130 @@
  ---
- license: cc-by-nc-4.0
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
  ---
+
+ ![cute](https://huggingface.co/matchaaaaa/Chaifighter-v3-20B/resolve/main/chaifighter-v3-cute.png)
+
+ # Chaifighter-v3-20B
+
+ Meet Chaifighter-v3! A flagship frankenmerge blend brewed with love by yours truly!
+
+ Chaifighter-v3 brings back the hyper-attention of the original, vastly expands on the fixes used to make v2 usable, and is built on a modified [Chunky-Lemon-Cookie-11B](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B). Moreover, it RoPEs up to 8K perfectly, and should work well at 12K and beyond.
+
+ *Native Context Length: 4K/4096 (can be extended to 8K/8192 or more with RoPE)*
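+
+ If you want to try that RoPE extension in Transformers directly, something like this should do it (just a sketch, and the linear scaling factor of 2 for 8K is my assumption, not a tested recipe):
+
+ ```python
+ # Rough sketch: stretch the native 4K context to 8K with linear RoPE scaling.
+ # The factor (8192 / 4096 = 2.0) is an assumption here, not a tuned value.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "matchaaaaa/Chaifighter-v3-20B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     rope_scaling={"type": "linear", "factor": 2.0},  # 4096 * 2.0 = 8192
+     max_position_embeddings=8192,
+     device_map="auto",
+ )
+ ```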
+
+ ## Prompt Template: Alpaca/Alpaca-based
+
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {prompt}
+
+ ### Response:
+ ```
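+
+ For reference, filling that template in Python looks roughly like this (a trivial sketch; the instruction text is just an example):
+
+ ```python
+ # Sketch: build an Alpaca-style prompt for this model.
+ ALPACA_TEMPLATE = (
+     "Below is an instruction that describes a task. "
+     "Write a response that appropriately completes the request.\n\n"
+     "### Instruction:\n{prompt}\n\n"
+     "### Response:\n"
+ )
+
+ prompt = ALPACA_TEMPLATE.format(prompt="Write a short scene set in a cozy tea shop.")
+ ```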
+
+ ## Recommended Settings: Universal-Light
+
+ Here are some setting ranges that tend to work for my models. I used these when testing, and they're pretty safe bets. Feel free to tweak according to taste or do whatever you want (though it might break, maybe). If you're using Transformers directly, see the sketch just below this list.
+
+ * Temperature: **1.0** to **1.25**
+ * Min-P: **0.05** to **0.1**
+ * Repetition Penalty: **1.05** to **1.1**
+ * Rep. Penalty Range: **256** or **512**
+ * *(all other samplers disabled)*
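+
+ The settings above map onto `generate()` roughly like this (a sketch; `min_p` needs a fairly recent Transformers release, and rep. penalty *range* is a Kobold/SillyTavern-style knob with no direct `generate()` equivalent, so it's omitted):
+
+ ```python
+ # Rough sketch: Universal-Light sampling via transformers.generate().
+ # Note: repetition penalty range has no direct equivalent here; transformers
+ # applies repetition_penalty over the whole context instead.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "matchaaaaa/Chaifighter-v3-20B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.float16, device_map="auto"
+ )
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # prompt from the template sketch above
+ output = model.generate(
+     **inputs,
+     max_new_tokens=512,
+     do_sample=True,
+     temperature=1.1,          # 1.0 to 1.25
+     min_p=0.05,               # 0.05 to 0.1
+     repetition_penalty=1.05,  # 1.05 to 1.1
+ )
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```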
+
+ ## The Deets
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ### Merge Method
+
+ This model was merged using the passthrough merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+
+ * [Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
+ * Pop-Taro-11B, *a variation of [Chunky-Lemon-Cookie-11B](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B)*
+ * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
+ * [crestf411/daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b)
+ * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
+ * [Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
+ * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+ * [Undi95/Mistral-11B-OmniMix-pippa-sharegpt-11b-qlora](https://huggingface.co/Undi95/Mistral-11B-OmniMix-pippa-sharegpt-11b-qlora)
+
+ ### The Special Sauce
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices: # modified Big-Lemon-Cookie recipe
+ - sources:
+   - model: SanjiWatsuki/Kunoichi-7B
+     layer_range: [0, 24]
+ - sources:
+   - model: crestf411/daybreak-kunoichi-2dpo-7b # this was Silicon-Maid in the OG
+     layer_range: [8, 24]
+ - sources:
+   - model: KatyTheCutie/LemonadeRP-4.5.3
+     layer_range: [24, 32]
+ merge_method: passthrough
+ dtype: float32
+ name: pre-Taro-11B
+ ---
+ models: # this is what FallenMerick did for the Chunky-Lemon-Cookie
+ - model: pre-Taro-11B
+   parameters:
+     weight: 0.85
+ - model: Sao10K/Fimbulvetr-11B-v2
+   parameters:
+     weight: 0.15
+ merge_method: linear
+ dtype: float32
+ name: Taro-11B
+ ---
+ models: # further healing with PEFT qLoRA, thanks undi
+ - model: Taro-11B
+   parameters:
+     weight: 0.68 # these values were a good balance
+ - model: Taro-11B+Undi95/Mistral-11B-OmniMix-pippa-sharegpt-11b-qlora # picked PIPPA because I'm old school
+   parameters:
+     weight: 0.32 # good balance pt. 2
+ merge_method: linear
+ dtype: float32
+ name: Pop-Taro-11B
+ ---
+ slices: # this is the really cursed part
+ - sources:
+   - model: Sao10K/Fimbulvetr-11B-v2
+     layer_range: [0, 40]
+ - sources:
+   - model: Pop-Taro-11B # probably will release this later, especially if it's good on its own and there's interest in it
+     layer_range: [0, 48] # includes the first 8 layers to boost attention, why does it work???
+ merge_method: passthrough
+ dtype: float32
+ name: Chaifighter-v3-20B
+ ```
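+
+ For anyone wanting to reproduce something like this, mergekit also has a Python entry point. A rough sketch (an assumption on my part; a staged `---`/`name:` recipe like the one above would be run one stage at a time, not all at once):
+
+ ```python
+ # Rough sketch: running a single-stage mergekit config from Python.
+ # Staged configs (with `---` separators and `name:` fields) would be run
+ # one stage at a time with something like this.
+ import yaml
+ from mergekit.config import MergeConfiguration
+ from mergekit.merge import MergeOptions, run_merge
+
+ with open("config.yaml", "r", encoding="utf-8") as fp:
+     merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))
+
+ run_merge(
+     merge_config,
+     out_path="./output-model",
+     options=MergeOptions(copy_tokenizer=True),
+ )
+ ```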
+
+ All merging was done at float32 precision to minimize quality loss.
+
+ ### The Thought Process
+ **Alternate title: "Input Layers Placed Halfway Through Your Frankenmerge Is All You Need"**
+
+ Note: much of this is conjecture. Thanks to [@ToastyPigeon](https://huggingface.co/ToastyPigeon) and the "Jeb's mad science 11B and 16B" thread on the Kobold Discord. Without them, my understanding of this model would be much, much worse. Their help and insights were crucial in making this model happen!
+
+ This model started with the original recipe. According to everything my friends and I know, it just shouldn't have worked nearly as well as it did. I wondered what it would take to make it work, and as it turns out, it was the repeated Mistral "output layers" (meaning, the last 8 or so hidden layers) that caused most of the model's trouble. There was still stack damage, though. Essentially, this is a 7B base model expanded to 19.5B parameters. If that sounds like a lot, that's because it is a lot.
+
+ One of the core reasons it works, we believe, is that [Fimbulvetr-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) is based on [SOLAR 10.7B](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). When SOLAR was being made, it received finetuning after being stacked up to 48 layers to heal the "stack damage". We think that this finetuning differentiated the layers enough from Mistral for the repeated "input layers" (the first 8 or so hidden layers) not to break things, letting the whole model actually function. Jeb's mad lads did a lot of testing and concluded that one of the countless ways to break a model is to repeat these "input layers", and well, apparently (evidently) SOLAR somehow allows this cursedness to work.
+
+ As a side note, [Fimbulvetr-v2.1-16K](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) was also tested in this merge. For some reason, it just wasn't happy there, and it caused all kinds of problems and a first-person bias (which I thought might be annoying to most people).
+
+ [Kunoichi](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) has always been one of my favorites because of how great it is at prompt-following and awareness. That's why its second set of "input layers" was chosen. [@Kromeurus](https://huggingface.co/kromeurus) recommended [daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b), which was trained on the [Storyteller's Diamond Law](https://files.catbox.moe/d15m3g.txt) and in theory should increase the model's knowledge a little. [LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) is a solid performer as well, being part of [@FallenMerick's](https://huggingface.co/FallenMerick) [Chunky-Lemon-Cookie-11B](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B) and by extension, [Honey-Yuzu-13B](https://huggingface.co/matchaaaaa/Honey-Yuzu-13B) (by me). It was also part of Chaifighter-v2's recipe, and as such, v3's writing should be familiar (in a good way :skull:) for those who liked v2.
+
+ Finally, this second stack of models was merged with Fimbulvetr-v2 to help heal the stack damage. In theory, this helps "smooth" the layers together and makes the model more put together overall. I went further with this idea by using [a PIPPA qLoRA trained for a Mistral 11B DUS stack called OmniMix](https://huggingface.co/Undi95/Mistral-11B-OmniMix-pippa-sharegpt-11b-qlora). I played with the values to ensure the end result was sufficiently stable without being overpowered by PIPPA.
+
+ There was a lot of trial and error involved in the creation of this model. Additionally, many, many great minds helped shape it. Thank you very much to everyone who kindly gave feedback, encouragement, or helped in any other way, big or small, with the development of this model and any past model. I really, really appreciate it. <3
+
+ And thank YOU for taking the time to read this and for checking out my model!
+
+ Have feedback? Comments? Questions? Don't hesitate to let me know! As always, have a fantastic day, and remember to take care of yourself! :)
chaifighter-v3-cute.png ADDED

Git LFS Details

  • SHA256: 72a3132fabe547fc95701fba139c013364936e0e60ee7b876282e3cff8efab48
  • Pointer size: 132 Bytes
  • Size of remote file: 2 MB
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "matchaaaaa/Chaifighter-v3-20B",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 4096,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 88,
+   "num_key_value_heads": 8,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.0.dev0",
+   "use_cache": false,
+   "vocab_size": 32000
+ }
model-00001-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9c10b753196df7681a6c4eb457cb3556698720d4d2e34fff7ca6f92f6b643b7
+ size 4773286512
model-00002-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ea9d7b974cead1f49b46e7e1fe4236343eed0961b5b7b52fb7a3a11fc6fe1bd
+ size 4999813128
model-00003-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e9d02fd5a0e3975f7cd95d7d6be5732f5498a8cd3f66374782154f2df3252d9
+ size 4999779672
model-00004-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cecc641bcb054a3e5cc74816336fe08bea88f5bf37e5c1873ee4bf44301d752
+ size 4899116472
model-00005-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd9ec9339c6aa94e2e9e19df1933402f43e6098e8841ba612e19b1993d7c5498
+ size 4999813120
model-00006-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99227679630e4d24967f54aecc8198d905f6506dd1931ca096f646be98ed59a2
+ size 4999813128
model-00007-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b1842bf17101782c1e3de43b162a7a370bf08f166931f49eb38b95d5cb99434
+ size 4999813128
model-00008-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f1b4435aa0fefa46e93aedbfcbaa01e2502bab9048875ccf864f8922d4608cf
+ size 4999813128
model-00009-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ba0b3f57816a759d6a4a2e80ec997bd17c7831a22867ff105d7691fcfc66fe0
+ size 4999813120
model-00010-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e57870100c6468f22e21d502c31e2e84339d123ef1b13fa70c29354186b10555
+ size 4999779672
model-00011-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cdd95e809c3051e31607605baffe403bcc4ea767f6e9638bbb31fc85ee1f6b6
+ size 4899116472
model-00012-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a96393f2ecd008990deaa2278942e3cc065fa5f8896f06eb69b3efcb3ca5e13
+ size 4999813128
model-00013-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18e9ba7aa2a3e804916c31abf65b7a085443187ad140c0b8cd76fed4cf8e8b36
+ size 4999813128
model-00014-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0191022024a19f62854475d902285fbe9f7af21a0691d584a7adc91731f9ea0
+ size 4999813112
model-00015-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92966ecb88a4525b8f866709699b603262155da94db61bcaf152770bdde04ed8
+ size 4999813104
model-00016-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e73fd9efeb79bc0daf9557fd32a8814cbd77f5e7b4813a3695b7c1dab29f4d9c
+ size 3254898512
model.safetensors.index.json ADDED
@@ -0,0 +1 @@
+ {"metadata": {"mergekit_version": "0.0.4.4", "total_size": 77824016384}, "weight_map": {"lm_head.weight": "model-00001-of-00016.safetensors", "model.embed_tokens.weight": "model-00001-of-00016.safetensors", "model.layers.40.input_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.40.mlp.down_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00001-of-00016.safetensors", "model.layers.40.mlp.gate_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00016.safetensors", "model.layers.40.mlp.up_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00001-of-00016.safetensors", "model.layers.40.post_attention_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.40.self_attn.k_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00016.safetensors", "model.layers.40.self_attn.o_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00016.safetensors", "model.layers.40.self_attn.q_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00016.safetensors", "model.layers.40.self_attn.v_proj.weight": "model-00001-of-00016.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.input_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.41.mlp.down_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.mlp.gate_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.mlp.up_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.post_attention_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.41.self_attn.k_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.self_attn.o_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.self_attn.q_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00016.safetensors", "model.layers.41.self_attn.v_proj.weight": "model-00001-of-00016.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00016.safetensors", "model.layers.50.input_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.10.input_layernorm.weight": "model-00001-of-00016.safetensors", "model.layers.50.mlp.down_proj.weight": "model-00001-of-00016.safetensors", "model.layers.10.mlp.down_proj.weight": "model-00002-of-00016.safetensors", "model.layers.50.mlp.gate_proj.weight": "model-00002-of-00016.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00016.safetensors", "model.layers.50.mlp.up_proj.weight": "model-00002-of-00016.safetensors", 
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00016.safetensors", "model.layers.50.post_attention_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.50.self_attn.k_proj.weight": "model-00002-of-00016.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00016.safetensors", "model.layers.50.self_attn.o_proj.weight": "model-00002-of-00016.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00016.safetensors", "model.layers.50.self_attn.q_proj.weight": "model-00002-of-00016.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00016.safetensors", "model.layers.50.self_attn.v_proj.weight": "model-00002-of-00016.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.input_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.11.input_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.51.mlp.down_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.mlp.gate_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.mlp.up_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.post_attention_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.51.self_attn.k_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.self_attn.o_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.self_attn.q_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00016.safetensors", "model.layers.51.self_attn.v_proj.weight": "model-00002-of-00016.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00016.safetensors", "model.layers.52.input_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.12.input_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.52.mlp.down_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00002-of-00016.safetensors", "model.layers.52.mlp.gate_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00016.safetensors", "model.layers.52.mlp.up_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00002-of-00016.safetensors", "model.layers.52.post_attention_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.52.self_attn.k_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00016.safetensors", "model.layers.52.self_attn.o_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00016.safetensors", "model.layers.52.self_attn.q_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00016.safetensors", 
"model.layers.52.self_attn.v_proj.weight": "model-00002-of-00016.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00016.safetensors", "model.layers.53.input_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.13.input_layernorm.weight": "model-00002-of-00016.safetensors", "model.layers.53.mlp.down_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00003-of-00016.safetensors", "model.layers.53.mlp.gate_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00016.safetensors", "model.layers.53.mlp.up_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.mlp.up_proj.weight": "model-00003-of-00016.safetensors", "model.layers.53.post_attention_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.53.self_attn.k_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00016.safetensors", "model.layers.53.self_attn.o_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00016.safetensors", "model.layers.53.self_attn.q_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00016.safetensors", "model.layers.53.self_attn.v_proj.weight": "model-00003-of-00016.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.input_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.14.input_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.54.mlp.down_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.mlp.gate_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.mlp.up_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.post_attention_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.54.self_attn.k_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.self_attn.o_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.self_attn.q_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00016.safetensors", "model.layers.54.self_attn.v_proj.weight": "model-00003-of-00016.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00016.safetensors", "model.layers.55.input_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.15.input_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.55.mlp.down_proj.weight": "model-00003-of-00016.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00003-of-00016.safetensors", "model.layers.55.mlp.gate_proj.weight": "model-00003-of-00016.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00016.safetensors", "model.layers.55.mlp.up_proj.weight": "model-00003-of-00016.safetensors", 
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00016.safetensors", "model.layers.55.post_attention_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00016.safetensors", "model.layers.55.self_attn.k_proj.weight": "model-00003-of-00016.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00016.safetensors", "model.layers.55.self_attn.o_proj.weight": "model-00003-of-00016.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00016.safetensors", "model.layers.55.self_attn.q_proj.weight": "model-00004-of-00016.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00016.safetensors", "model.layers.55.self_attn.v_proj.weight": "model-00004-of-00016.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.input_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.16.input_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.56.mlp.down_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.mlp.down_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.mlp.gate_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.mlp.up_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.post_attention_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.56.self_attn.k_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.self_attn.o_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.self_attn.q_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00016.safetensors", "model.layers.56.self_attn.v_proj.weight": "model-00004-of-00016.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00016.safetensors", "model.layers.57.input_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.17.input_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.57.mlp.down_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00004-of-00016.safetensors", "model.layers.57.mlp.gate_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00004-of-00016.safetensors", "model.layers.57.mlp.up_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00004-of-00016.safetensors", "model.layers.57.post_attention_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.57.self_attn.k_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00016.safetensors", "model.layers.57.self_attn.o_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00016.safetensors", "model.layers.57.self_attn.q_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00016.safetensors", 
"model.layers.57.self_attn.v_proj.weight": "model-00004-of-00016.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00016.safetensors", "model.layers.58.input_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.18.input_layernorm.weight": "model-00004-of-00016.safetensors", "model.layers.58.mlp.down_proj.weight": "model-00004-of-00016.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00004-of-00016.safetensors", "model.layers.58.mlp.gate_proj.weight": "model-00004-of-00016.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00004-of-00016.safetensors", "model.layers.58.mlp.up_proj.weight": "model-00004-of-00016.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00005-of-00016.safetensors", "model.layers.58.post_attention_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.58.self_attn.k_proj.weight": "model-00005-of-00016.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00005-of-00016.safetensors", "model.layers.58.self_attn.o_proj.weight": "model-00005-of-00016.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00005-of-00016.safetensors", "model.layers.58.self_attn.q_proj.weight": "model-00005-of-00016.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00005-of-00016.safetensors", "model.layers.58.self_attn.v_proj.weight": "model-00005-of-00016.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.input_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.19.input_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.59.mlp.down_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.mlp.down_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.mlp.gate_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.mlp.up_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.post_attention_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.59.self_attn.k_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.self_attn.o_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.self_attn.q_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00005-of-00016.safetensors", "model.layers.59.self_attn.v_proj.weight": "model-00005-of-00016.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00005-of-00016.safetensors", "model.layers.42.input_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.2.input_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.42.mlp.down_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00005-of-00016.safetensors", "model.layers.42.mlp.gate_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.mlp.gate_proj.weight": "model-00005-of-00016.safetensors", "model.layers.42.mlp.up_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.mlp.up_proj.weight": 
"model-00005-of-00016.safetensors", "model.layers.42.post_attention_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.42.self_attn.k_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00005-of-00016.safetensors", "model.layers.42.self_attn.o_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00005-of-00016.safetensors", "model.layers.42.self_attn.q_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00005-of-00016.safetensors", "model.layers.42.self_attn.v_proj.weight": "model-00005-of-00016.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00005-of-00016.safetensors", "model.layers.60.input_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.20.input_layernorm.weight": "model-00005-of-00016.safetensors", "model.layers.60.mlp.down_proj.weight": "model-00005-of-00016.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00005-of-00016.safetensors", "model.layers.60.mlp.gate_proj.weight": "model-00005-of-00016.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00016.safetensors", "model.layers.60.mlp.up_proj.weight": "model-00006-of-00016.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00006-of-00016.safetensors", "model.layers.60.post_attention_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.60.self_attn.k_proj.weight": "model-00006-of-00016.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00006-of-00016.safetensors", "model.layers.60.self_attn.o_proj.weight": "model-00006-of-00016.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00006-of-00016.safetensors", "model.layers.60.self_attn.q_proj.weight": "model-00006-of-00016.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00006-of-00016.safetensors", "model.layers.60.self_attn.v_proj.weight": "model-00006-of-00016.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.input_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.21.input_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.61.mlp.down_proj.weight": "model-00006-of-00016.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.mlp.gate_proj.weight": "model-00006-of-00016.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.mlp.up_proj.weight": "model-00006-of-00016.safetensors", "model.layers.21.mlp.up_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.post_attention_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.61.self_attn.k_proj.weight": "model-00006-of-00016.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.self_attn.o_proj.weight": "model-00006-of-00016.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.self_attn.q_proj.weight": "model-00006-of-00016.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00006-of-00016.safetensors", "model.layers.61.self_attn.v_proj.weight": 
"model-00006-of-00016.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.input_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.22.input_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.62.mlp.down_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.mlp.gate_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.mlp.up_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.mlp.up_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.post_attention_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.62.self_attn.k_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.self_attn.o_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.self_attn.q_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00006-of-00016.safetensors", "model.layers.62.self_attn.v_proj.weight": "model-00006-of-00016.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00006-of-00016.safetensors", "model.layers.63.input_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.23.input_layernorm.weight": "model-00006-of-00016.safetensors", "model.layers.63.mlp.down_proj.weight": "model-00006-of-00016.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00006-of-00016.safetensors", "model.layers.63.mlp.gate_proj.weight": "model-00006-of-00016.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00007-of-00016.safetensors", "model.layers.63.mlp.up_proj.weight": "model-00007-of-00016.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00007-of-00016.safetensors", "model.layers.63.post_attention_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.63.self_attn.k_proj.weight": "model-00007-of-00016.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00007-of-00016.safetensors", "model.layers.63.self_attn.o_proj.weight": "model-00007-of-00016.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00007-of-00016.safetensors", "model.layers.63.self_attn.q_proj.weight": "model-00007-of-00016.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00007-of-00016.safetensors", "model.layers.63.self_attn.v_proj.weight": "model-00007-of-00016.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00007-of-00016.safetensors", "model.layers.64.input_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.24.input_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.64.mlp.down_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.mlp.down_proj.weight": "model-00007-of-00016.safetensors", "model.layers.64.mlp.gate_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00007-of-00016.safetensors", "model.layers.64.mlp.up_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00007-of-00016.safetensors", 
"model.layers.64.post_attention_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.64.self_attn.k_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00007-of-00016.safetensors", "model.layers.64.self_attn.o_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00007-of-00016.safetensors", "model.layers.64.self_attn.q_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00007-of-00016.safetensors", "model.layers.64.self_attn.v_proj.weight": "model-00007-of-00016.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.input_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.25.input_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.65.mlp.down_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.mlp.down_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.mlp.gate_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.mlp.up_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.post_attention_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.65.self_attn.k_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.self_attn.o_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.self_attn.q_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00007-of-00016.safetensors", "model.layers.65.self_attn.v_proj.weight": "model-00007-of-00016.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00007-of-00016.safetensors", "model.layers.66.input_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.26.input_layernorm.weight": "model-00007-of-00016.safetensors", "model.layers.66.mlp.down_proj.weight": "model-00007-of-00016.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00007-of-00016.safetensors", "model.layers.66.mlp.gate_proj.weight": "model-00008-of-00016.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00008-of-00016.safetensors", "model.layers.66.mlp.up_proj.weight": "model-00008-of-00016.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00008-of-00016.safetensors", "model.layers.66.post_attention_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.66.self_attn.k_proj.weight": "model-00008-of-00016.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00008-of-00016.safetensors", "model.layers.66.self_attn.o_proj.weight": "model-00008-of-00016.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00008-of-00016.safetensors", "model.layers.66.self_attn.q_proj.weight": "model-00008-of-00016.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00008-of-00016.safetensors", "model.layers.66.self_attn.v_proj.weight": "model-00008-of-00016.safetensors", 
"model.layers.26.self_attn.v_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.input_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.27.input_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.67.mlp.down_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.mlp.gate_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.mlp.gate_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.mlp.up_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.mlp.up_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.post_attention_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.67.self_attn.k_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.self_attn.o_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.self_attn.q_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00008-of-00016.safetensors", "model.layers.67.self_attn.v_proj.weight": "model-00008-of-00016.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.input_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.28.input_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.68.mlp.down_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.mlp.gate_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.mlp.up_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.post_attention_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.68.self_attn.k_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.self_attn.o_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.self_attn.q_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00008-of-00016.safetensors", "model.layers.68.self_attn.v_proj.weight": "model-00008-of-00016.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00008-of-00016.safetensors", "model.layers.69.input_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.29.input_layernorm.weight": "model-00008-of-00016.safetensors", "model.layers.69.mlp.down_proj.weight": "model-00008-of-00016.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00009-of-00016.safetensors", "model.layers.69.mlp.gate_proj.weight": "model-00009-of-00016.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00009-of-00016.safetensors", "model.layers.69.mlp.up_proj.weight": "model-00009-of-00016.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00009-of-00016.safetensors", 
"model.layers.69.post_attention_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.69.self_attn.k_proj.weight": "model-00009-of-00016.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00009-of-00016.safetensors", "model.layers.69.self_attn.o_proj.weight": "model-00009-of-00016.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00009-of-00016.safetensors", "model.layers.69.self_attn.q_proj.weight": "model-00009-of-00016.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00009-of-00016.safetensors", "model.layers.69.self_attn.v_proj.weight": "model-00009-of-00016.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.input_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.3.input_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.43.mlp.down_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.mlp.gate_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.mlp.up_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.post_attention_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.43.self_attn.k_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.self_attn.o_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.self_attn.q_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00009-of-00016.safetensors", "model.layers.43.self_attn.v_proj.weight": "model-00009-of-00016.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.input_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.30.input_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.70.mlp.down_proj.weight": "model-00009-of-00016.safetensors", "model.layers.30.mlp.down_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.mlp.gate_proj.weight": "model-00009-of-00016.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.mlp.up_proj.weight": "model-00009-of-00016.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.post_attention_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.70.self_attn.k_proj.weight": "model-00009-of-00016.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.self_attn.o_proj.weight": "model-00009-of-00016.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.self_attn.q_proj.weight": "model-00009-of-00016.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00009-of-00016.safetensors", "model.layers.70.self_attn.v_proj.weight": "model-00009-of-00016.safetensors", 
"model.layers.30.self_attn.v_proj.weight": "model-00009-of-00016.safetensors", "model.layers.71.input_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.31.input_layernorm.weight": "model-00009-of-00016.safetensors", "model.layers.71.mlp.down_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00010-of-00016.safetensors", "model.layers.71.mlp.gate_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00010-of-00016.safetensors", "model.layers.71.mlp.up_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00010-of-00016.safetensors", "model.layers.71.post_attention_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.71.self_attn.k_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00010-of-00016.safetensors", "model.layers.71.self_attn.o_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00010-of-00016.safetensors", "model.layers.71.self_attn.q_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00010-of-00016.safetensors", "model.layers.71.self_attn.v_proj.weight": "model-00010-of-00016.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.input_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.32.input_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.72.mlp.down_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.mlp.gate_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.mlp.gate_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.mlp.up_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.post_attention_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.72.self_attn.k_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.self_attn.o_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.self_attn.q_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00010-of-00016.safetensors", "model.layers.72.self_attn.v_proj.weight": "model-00010-of-00016.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00010-of-00016.safetensors", "model.layers.73.input_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.33.input_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.73.mlp.down_proj.weight": "model-00010-of-00016.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00010-of-00016.safetensors", "model.layers.73.mlp.gate_proj.weight": "model-00010-of-00016.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00010-of-00016.safetensors", "model.layers.73.mlp.up_proj.weight": "model-00010-of-00016.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00010-of-00016.safetensors", 
"model.layers.73.post_attention_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00010-of-00016.safetensors", "model.layers.73.self_attn.k_proj.weight": "model-00010-of-00016.safetensors", "model.layers.33.self_attn.k_proj.weight": "model-00010-of-00016.safetensors", "model.layers.73.self_attn.o_proj.weight": "model-00010-of-00016.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00011-of-00016.safetensors", "model.layers.73.self_attn.q_proj.weight": "model-00011-of-00016.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00011-of-00016.safetensors", "model.layers.73.self_attn.v_proj.weight": "model-00011-of-00016.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.input_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.34.input_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.74.mlp.down_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.mlp.gate_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.mlp.up_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.mlp.up_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.post_attention_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.74.self_attn.k_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.self_attn.o_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.self_attn.q_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00011-of-00016.safetensors", "model.layers.74.self_attn.v_proj.weight": "model-00011-of-00016.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.input_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.35.input_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.75.mlp.down_proj.weight": "model-00011-of-00016.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.mlp.gate_proj.weight": "model-00011-of-00016.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.mlp.up_proj.weight": "model-00011-of-00016.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.post_attention_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.75.self_attn.k_proj.weight": "model-00011-of-00016.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.self_attn.o_proj.weight": "model-00011-of-00016.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.self_attn.q_proj.weight": "model-00011-of-00016.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00011-of-00016.safetensors", "model.layers.75.self_attn.v_proj.weight": "model-00011-of-00016.safetensors", 
"model.layers.35.self_attn.v_proj.weight": "model-00011-of-00016.safetensors", "model.layers.76.input_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.36.input_layernorm.weight": "model-00011-of-00016.safetensors", "model.layers.76.mlp.down_proj.weight": "model-00011-of-00016.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00011-of-00016.safetensors", "model.layers.76.mlp.gate_proj.weight": "model-00011-of-00016.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00011-of-00016.safetensors", "model.layers.76.mlp.up_proj.weight": "model-00011-of-00016.safetensors", "model.layers.36.mlp.up_proj.weight": "model-00012-of-00016.safetensors", "model.layers.76.post_attention_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.76.self_attn.k_proj.weight": "model-00012-of-00016.safetensors", "model.layers.36.self_attn.k_proj.weight": "model-00012-of-00016.safetensors", "model.layers.76.self_attn.o_proj.weight": "model-00012-of-00016.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00012-of-00016.safetensors", "model.layers.76.self_attn.q_proj.weight": "model-00012-of-00016.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00012-of-00016.safetensors", "model.layers.76.self_attn.v_proj.weight": "model-00012-of-00016.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.input_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.37.input_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.77.mlp.down_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.mlp.gate_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.mlp.up_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.post_attention_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.77.self_attn.k_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.self_attn.o_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.self_attn.q_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00012-of-00016.safetensors", "model.layers.77.self_attn.v_proj.weight": "model-00012-of-00016.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00012-of-00016.safetensors", "model.layers.78.input_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.38.input_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.78.mlp.down_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.mlp.down_proj.weight": "model-00012-of-00016.safetensors", "model.layers.78.mlp.gate_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00012-of-00016.safetensors", "model.layers.78.mlp.up_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00012-of-00016.safetensors", 
"model.layers.78.post_attention_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.78.self_attn.k_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00012-of-00016.safetensors", "model.layers.78.self_attn.o_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00012-of-00016.safetensors", "model.layers.78.self_attn.q_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00012-of-00016.safetensors", "model.layers.78.self_attn.v_proj.weight": "model-00012-of-00016.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00012-of-00016.safetensors", "model.layers.79.input_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.39.input_layernorm.weight": "model-00012-of-00016.safetensors", "model.layers.79.mlp.down_proj.weight": "model-00012-of-00016.safetensors", "model.layers.39.mlp.down_proj.weight": "model-00012-of-00016.safetensors", "model.layers.79.mlp.gate_proj.weight": "model-00012-of-00016.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00012-of-00016.safetensors", "model.layers.79.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.79.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.79.self_attn.k_proj.weight": "model-00013-of-00016.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00013-of-00016.safetensors", "model.layers.79.self_attn.o_proj.weight": "model-00013-of-00016.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00013-of-00016.safetensors", "model.layers.79.self_attn.q_proj.weight": "model-00013-of-00016.safetensors", "model.layers.39.self_attn.q_proj.weight": "model-00013-of-00016.safetensors", "model.layers.79.self_attn.v_proj.weight": "model-00013-of-00016.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.input_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.4.input_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.44.mlp.down_proj.weight": "model-00013-of-00016.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.mlp.gate_proj.weight": "model-00013-of-00016.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.44.self_attn.k_proj.weight": "model-00013-of-00016.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.self_attn.o_proj.weight": "model-00013-of-00016.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.self_attn.q_proj.weight": "model-00013-of-00016.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00013-of-00016.safetensors", "model.layers.44.self_attn.v_proj.weight": "model-00013-of-00016.safetensors", 
"model.layers.4.self_attn.v_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.input_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.80.mlp.down_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.mlp.gate_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.80.self_attn.k_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.self_attn.o_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.self_attn.q_proj.weight": "model-00013-of-00016.safetensors", "model.layers.80.self_attn.v_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.input_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.81.mlp.down_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.mlp.gate_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.81.self_attn.k_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.self_attn.o_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.self_attn.q_proj.weight": "model-00013-of-00016.safetensors", "model.layers.81.self_attn.v_proj.weight": "model-00013-of-00016.safetensors", "model.layers.82.input_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.82.mlp.down_proj.weight": "model-00013-of-00016.safetensors", "model.layers.82.mlp.gate_proj.weight": "model-00013-of-00016.safetensors", "model.layers.82.mlp.up_proj.weight": "model-00013-of-00016.safetensors", "model.layers.82.post_attention_layernorm.weight": "model-00013-of-00016.safetensors", "model.layers.82.self_attn.k_proj.weight": "model-00014-of-00016.safetensors", "model.layers.82.self_attn.o_proj.weight": "model-00014-of-00016.safetensors", "model.layers.82.self_attn.q_proj.weight": "model-00014-of-00016.safetensors", "model.layers.82.self_attn.v_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.83.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.mlp.gate_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.mlp.up_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.post_attention_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.83.self_attn.k_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.self_attn.o_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.self_attn.q_proj.weight": "model-00014-of-00016.safetensors", "model.layers.83.self_attn.v_proj.weight": "model-00014-of-00016.safetensors", "model.layers.84.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.84.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.84.mlp.gate_proj.weight": "model-00014-of-00016.safetensors", "model.layers.84.mlp.up_proj.weight": "model-00014-of-00016.safetensors", "model.layers.84.post_attention_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.84.self_attn.k_proj.weight": "model-00014-of-00016.safetensors", "model.layers.84.self_attn.o_proj.weight": "model-00014-of-00016.safetensors", "model.layers.84.self_attn.q_proj.weight": "model-00014-of-00016.safetensors", 
"model.layers.84.self_attn.v_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.85.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.mlp.gate_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.mlp.up_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.post_attention_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.85.self_attn.k_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.self_attn.o_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.self_attn.q_proj.weight": "model-00014-of-00016.safetensors", "model.layers.85.self_attn.v_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.86.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.mlp.gate_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.mlp.up_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.post_attention_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.86.self_attn.k_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.self_attn.o_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.self_attn.q_proj.weight": "model-00014-of-00016.safetensors", "model.layers.86.self_attn.v_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.87.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.mlp.gate_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.mlp.up_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.post_attention_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.87.self_attn.k_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.self_attn.o_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.self_attn.q_proj.weight": "model-00014-of-00016.safetensors", "model.layers.87.self_attn.v_proj.weight": "model-00014-of-00016.safetensors", "model.layers.45.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.5.input_layernorm.weight": "model-00014-of-00016.safetensors", "model.layers.45.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.5.mlp.down_proj.weight": "model-00014-of-00016.safetensors", "model.layers.45.mlp.gate_proj.weight": "model-00015-of-00016.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00015-of-00016.safetensors", "model.layers.45.mlp.up_proj.weight": "model-00015-of-00016.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00015-of-00016.safetensors", "model.layers.45.post_attention_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.45.self_attn.k_proj.weight": "model-00015-of-00016.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00015-of-00016.safetensors", "model.layers.45.self_attn.o_proj.weight": "model-00015-of-00016.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00015-of-00016.safetensors", "model.layers.45.self_attn.q_proj.weight": "model-00015-of-00016.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00015-of-00016.safetensors", "model.layers.45.self_attn.v_proj.weight": "model-00015-of-00016.safetensors", 
"model.layers.5.self_attn.v_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.input_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.6.input_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.46.mlp.down_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.mlp.gate_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.mlp.up_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.post_attention_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.46.self_attn.k_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.self_attn.o_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.self_attn.q_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00015-of-00016.safetensors", "model.layers.46.self_attn.v_proj.weight": "model-00015-of-00016.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.input_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.7.input_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.47.mlp.down_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.mlp.gate_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.mlp.up_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.post_attention_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.47.self_attn.k_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.self_attn.o_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.self_attn.q_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00015-of-00016.safetensors", "model.layers.47.self_attn.v_proj.weight": "model-00015-of-00016.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00015-of-00016.safetensors", "model.layers.48.input_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.8.input_layernorm.weight": "model-00015-of-00016.safetensors", "model.layers.48.mlp.down_proj.weight": "model-00015-of-00016.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00016-of-00016.safetensors", "model.layers.48.mlp.gate_proj.weight": "model-00016-of-00016.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00016-of-00016.safetensors", "model.layers.48.mlp.up_proj.weight": "model-00016-of-00016.safetensors", "model.layers.8.mlp.up_proj.weight": "model-00016-of-00016.safetensors", "model.layers.48.post_attention_layernorm.weight": 
"model-00016-of-00016.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00016-of-00016.safetensors", "model.layers.48.self_attn.k_proj.weight": "model-00016-of-00016.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00016-of-00016.safetensors", "model.layers.48.self_attn.o_proj.weight": "model-00016-of-00016.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00016-of-00016.safetensors", "model.layers.48.self_attn.q_proj.weight": "model-00016-of-00016.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00016-of-00016.safetensors", "model.layers.48.self_attn.v_proj.weight": "model-00016-of-00016.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.input_layernorm.weight": "model-00016-of-00016.safetensors", "model.layers.9.input_layernorm.weight": "model-00016-of-00016.safetensors", "model.layers.49.mlp.down_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.mlp.down_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.mlp.gate_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.mlp.up_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.post_attention_layernorm.weight": "model-00016-of-00016.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00016-of-00016.safetensors", "model.layers.49.self_attn.k_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.self_attn.o_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.self_attn.q_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00016-of-00016.safetensors", "model.layers.49.self_attn.v_proj.weight": "model-00016-of-00016.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00016-of-00016.safetensors", "model.norm.weight": "model-00016-of-00016.safetensors"}}
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
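This `special_tokens_map.json` declares the standard Mistral-style special tokens: `<s>` as BOS, `</s>` as EOS, and `<unk>` for unknown tokens. A quick sanity check with `transformers` (a sketch, assuming the repo id of this model card) confirms the tokenizer picks them up:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("matchaaaaa/Chaifighter-v3-20B")

# These values come straight from special_tokens_map.json.
print(tok.bos_token, tok.bos_token_id)  # "<s>"
print(tok.eos_token, tok.eos_token_id)  # "</s>"
print(tok.unk_token, tok.unk_token_id)  # "<unk>"
```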
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
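`tokenizer.model` is stored as a Git LFS pointer rather than the file itself: the three lines record the pointer spec version, the SHA-256 of the real file, and its size in bytes. A minimal sketch of verifying a fetched copy against this pointer (assuming `tokenizer.model` has already been pulled, e.g. via `git lfs pull`):

```python
import hashlib
import os

EXPECTED_OID = "dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055"
EXPECTED_SIZE = 493443

path = "tokenizer.model"

# The size check is cheap, so do it first.
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

# Hash the file in chunks to keep memory use flat.
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("tokenizer.model matches its LFS pointer")
```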