ycjcl868 committed
Commit d636ed0
1 Parent(s): e2cc49b

feat: update model
README.md DELETED
@@ -1,201 +0,0 @@
---
license: gemma
widget:
  - messages:
      - role: user
        content: 大脑是如何工作的?
inference:
  parameters:
    max_new_tokens: 200
language:
  - en
  - zh
library_name: transformers
pipeline_tag: text-generation
tags:
  - gemma
  - chinese
  - sft
---

# Updates

- [May 23, 2024] 🔥 Support function calling

# Model Summary

[Gemma-1.1-7B-Chinese-Chat](https://huggingface.co/ycjcl868/Gemma-1.1-7B-Chinese-Chat) is an instruction-tuned language model for Chinese and English users, built upon the gemma-1.1-7b-it model ([GitHub](https://github.com/ycjcl868/Gemma-1.1-7B-Chinese-Chat/tree/main)).

Developed by: [ycjcl868](https://github.com/ycjcl868)

- License: [Gemma License](https://www.kaggle.com/models/google/gemma/license/consent)
- Base Model: gemma-1.1-7b-it
- Model Size: 8.54B
- Context length: 8K

# Introduction

This is the first model specifically fine-tuned for Chinese and English users through SFT on top of the [gemma-1.1-7b-it model](https://huggingface.co/google/gemma-1.1-7b-it).

**Compared to the original [gemma-1.1-7b-it model](https://huggingface.co/google/gemma-1.1-7b-it), our Gemma-1.1-7B-Chinese-Chat model significantly reduces the issues of "Chinese questions with English answers" and the mixing of Chinese and English in responses.**

Training details (the sketch after this list shows how they map onto common training arguments):

- epochs: 3
- learning rate: 0.0001
- learning rate scheduler type: cosine
- warmup ratio: 0.1
- cutoff length (i.e. context length): 8192
- global batch size: 2
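
For illustration only, here is how those hyperparameters would map onto Hugging Face `TrainingArguments`. This is a hypothetical sketch, not the author's actual training script; the output directory and the single-GPU batch split are assumptions:

```py
# Hypothetical sketch: mapping the listed hyperparameters onto Hugging Face
# TrainingArguments. Not the author's actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-1.1-7b-chinese-chat-sft",  # assumed path
    num_train_epochs=3,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    per_device_train_batch_size=1,  # assumed split: 1 sample per device ...
    gradient_accumulation_steps=2,  # ... x 2 accumulation steps = global batch size 2
    bf16=True,                      # matches the checkpoint's bfloat16 dtype
)
# The cutoff length of 8192 is applied when tokenizing/packing the SFT data,
# not through TrainingArguments.
```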

<br />

# Usage

## Chat Template

The instruction-tuned model uses a chat template that must be adhered to for conversational use.
The easiest way to apply it is with the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ycjcl868/Gemma-1.1-7B-Chinese-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place shards across available devices
    torch_dtype="auto",  # keep the checkpoint's bfloat16 weights
)

# Build the prompt with the model's chat template.
messages = [
    { "role": "user", "content": "写一首诗" },
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens, not the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))

# 春风拂柳,春雨轻轻,
# 花香满园,鸟鸣清脆。
# 一朵红梅,红彤彤的,
# 美不胜收,美不胜收。
```
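
For reference, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should render the conversation above roughly as follows (an assumption based on the standard Gemma chat template, shown to make the required format explicit):

```
<bos><start_of_turn>user
写一首诗<end_of_turn>
<start_of_turn>model
```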

# Examples

```
User: 参加四年级数学竞赛同学中,男同学获奖人数比女同学多2人,女同学获奖人数比男同学人数的一半多2人,男女同学各有多少人获奖?
Assistant:
设男同学的获奖人数为x,则女同学的获奖人数为y,所以有以下两式:
x=y+2
y=2(x+2)
将第一式代入第二式中,得:
y=2(y+2)+2
y=2y+4+2
y=2y+6
2y=y+6
y=6
因此,女同学获奖人数为6人,男同学获奖人数为8人。
```

```
User: 我的蓝牙耳机坏了,我应该去看耳鼻喉科还是牙科?
Assistant: 蓝牙耳机是属于电子产品的,而不是耳鼻喉科或牙科的范畴。如果你怀疑耳机有故障,建议去电子产品维修店或者原厂维修中心进行检查。如果需要新的耳机,你也可以去电器商店或电子产品市场购买。
```

```
User: 来几句成语
Assistant: “一日三尺”是指过时间一切都会改变。
```

```
User: 列出5种可以改善睡眠质量的方法
Assistant:
1. 保持规律的睡眠时间表
2. 避免下午摄入咖啡因
3. 避免睡前摄入酒精
4. 避免睡前摄入高糖饮食
5. 每天至少进行30分钟的运动
```

```
User: 回答一个开放性的问题:人类存在的意义是什么?
Assistant: 人类存在的意义是一个复杂的哲学和哲学问题,因为人类存在于自然界中,我们作为人类有责任和义务来保护地球和生物,而我们作为人类也有责任来发展自己的精神和思想,来创造更好的未来。
我从哲学角度来看,人类存在的意义是通过我们自己的思想和行动,来实现我们对精神和精神的追求。通过我们自己的努力和创造力,我们可以创造出更多的艺术、文化和思想,来促进人类的进步和发展。
```

```
User: 解释以下缩写: JSON、CEO、PDF、HTML、NASA
Assistant:
JSON(JavaScript Object Notation):JavaScript对象的表示法,用来在JavaScript和许多其他编程语言之间传递数据。
CEO(Chief Executive Officer):首席执行官,指企业的高级管理人员,通常是指企业的主席或总裁。
PDF(Portable Document Format):可移植文档格式,是一种电子文档格式,可以确保文档在不同的操作系统和设备上都能以一致的方式呈现。
HTML(HyperText Markup Language):超文本标记语言,是网页内容的标记语言,用来定义网页的结构和内容。
NASA(National Aeronautics and Space Administration):美国国家航空航天局,是美国政府的宇航机构,负责美国在太空和航空方面的研究和发展。
```

## Function call

**User**:
````
以下是您可以使用的工具列表:

```python
def internet_search(query: str):
    """
    Returns a list of relevant document snippets for a textual query retrieved from the internet

    Args:
        query (str): Query to search the internet with
    """
    pass
```

```python
def directly_answer():
    """
    Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
    """
    pass
```

写 'Action:',后跟要调用的 JSON 中的操作列表,例如.
Action:
```json
[
    {
        "tool_name": "tool name (one of [internet_search, directly_answer])",
        "parameters": "the input to the tool"
    }
]
```

帮我找到今天的新闻有哪些:
````

**Response**:
```
Action:
[
    {
        "tool_name": "internet_search",
        "parameters": "今天有哪些新闻"
    }
]
```
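
A minimal sketch (not part of this repository) of how the `Action:` block could be parsed and dispatched to the declared tools; the `model_output` string and the dispatch step are illustrative assumptions:

```py
# Hypothetical sketch: parse the model's "Action:" block and dispatch tool calls.
import json
import re

# Example output in the format shown above (assumed, for illustration).
model_output = '''Action:
[
    {
        "tool_name": "internet_search",
        "parameters": "今天有哪些新闻"
    }
]'''

def parse_actions(response: str) -> list:
    """Extract the JSON action list that follows the 'Action:' marker."""
    match = re.search(r"Action:\s*(\[.*\])", response, re.DOTALL)
    return json.loads(match.group(1)) if match else []

for action in parse_actions(model_output):
    # Route to the matching tool here, e.g. internet_search(action["parameters"]).
    print(action["tool_name"], "->", action["parameters"])
```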
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/root/llms/gemma-1-1-7b-instruct/gemma-1-1-7b-it",
+  "_name_or_path": "google/gemma-1.1-7b-it",
   "architectures": [
     "GemmaForCausalLM"
   ],
@@ -22,7 +22,7 @@
   "rms_norm_eps": 1e-06,
   "rope_theta": 10000.0,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.41.0",
+  "transformers_version": "4.41.2",
   "use_cache": true,
   "vocab_size": 256000
 }
generation_config.json CHANGED
@@ -3,5 +3,5 @@
   "bos_token_id": 2,
   "eos_token_id": 1,
   "pad_token_id": 0,
-  "transformers_version": "4.41.0"
+  "transformers_version": "4.41.2"
 }
model-00004-of-00009.safetensors → model-00001-of-00004.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:d8ec04307bedc635948bb1276de718efba11562372c062ca484029249aea1723
- size 1912655720
+ oid sha256:1a862ac690c005bb7952aa8b2123132369f3b1521450dfbd6a83a5b7ae7e80a5
+ size 4995496656
model-00001-of-00009.safetensors → model-00002-of-00004.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:d9b5b68ea16c8a48f7ee84b8ca00c4d98c3b931ced84a03bda9d43df657bd1f0
- size 1975518024
+ oid sha256:ac65c6ab04cd1bd5a2e1e6edd21bab00089afc109b6febe1b57b57b1cfc09090
+ size 4982953168
model-00002-of-00009.safetensors → model-00003-of-00004.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:528cceaf066a9f75c17e8b7c86ca33b135e8e8deccaedb8823d2b52ff92735a1
- size 1912655712
+ oid sha256:05cede5d200b44e8c2009390967c0511effc20acdf6e17b7d9fe665c43380de1
+ size 4982953200
model-00003-of-00009.safetensors → model-00004-of-00004.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:8735a0e8f7a534c4b8869c6fc56ad477f9da322ddca36789df253ac05f22f123
- size 1962974488
+ oid sha256:059b3fecd6440b5eb7e62ddd9a1d8113e4483b2e6168b159581d1f89a3e7a287
+ size 2113988336
model-00005-of-00009.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8c50c9bffb87c569c7b37dff85fdc59d754444bdeab0984001fbb43868dc7cde
- size 1962974520
model-00006-of-00009.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a86e62152305a3a4da9f3c8626047ad6e273c37296d28a8c497d4c1f396fa202
- size 1912655744
model-00007-of-00009.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e3f535ed95f49f38974fef902e3a68df06c29e51797533b373ca0077c3dbcd07
- size 1962974520
model-00008-of-00009.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c585b6531a96f94d0f58f5ffbb83610b250700cd0aa06f9400be875ba00e213b
- size 1912655744
model-00009-of-00009.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e749dc41d643a9343a1f538ccc1e8f2fcf411fe21ac5769e4d9f2af095ac7fb5
- size 1560326864
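This commit consolidates the checkpoint from nine ~2 GB shards into four larger shards. As a quick sanity check (illustrative arithmetic only), the four new shard files sum to slightly more than the `total_size` recorded in the index, because each safetensors file carries its own header:

```py
# Illustrative check: new shard file sizes vs. the index's total_size.
shards = [4995496656, 4982953168, 4982953200, 2113988336]
print(sum(shards))                 # 17075391360 bytes on disk
print(17075361792)                 # total_size from model.safetensors.index.json
print(sum(shards) - 17075361792)   # 29568 bytes of per-file header overhead
```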
model.safetensors.index.json CHANGED
@@ -3,259 +3,259 @@
     "total_size": 17075361792
   },
   "weight_map": {
-    "model.embed_tokens.weight": "model-00001-of-00009.safetensors",
-    "model.layers.0.input_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.0.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
-    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
-    "model.layers.0.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
-    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
-    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
-    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
-    "model.layers.1.input_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.1.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.10.input_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.10.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.11.input_layernorm.weight": "model-00005-of-00009.safetensors",
-    "model.layers.11.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.11.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.11.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.11.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
-    "model.layers.11.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.11.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.11.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.11.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.12.input_layernorm.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.12.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.input_layernorm.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.13.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.14.input_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.14.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.14.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.14.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.14.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.14.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.14.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.14.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.14.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
-    "model.layers.15.input_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.15.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.input_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.16.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.input_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.17.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.18.input_layernorm.weight": "model-00007-of-00009.safetensors",
-    "model.layers.18.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.18.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.18.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.18.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
-    "model.layers.18.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.18.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.18.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.18.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
-    "model.layers.19.input_layernorm.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.19.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.2.input_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.20.input_layernorm.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.20.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.21.input_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.21.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.21.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.21.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.21.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.21.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.21.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.21.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.21.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
-    "model.layers.22.input_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.22.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.input_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.23.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.input_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.24.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.25.input_layernorm.weight": "model-00009-of-00009.safetensors",
-    "model.layers.25.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.25.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.25.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.25.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
-    "model.layers.25.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.25.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.25.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.25.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
-    "model.layers.26.input_layernorm.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.26.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.input_layernorm.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.27.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
-    "model.layers.3.input_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.4.input_layernorm.weight": "model-00003-of-00009.safetensors",
-    "model.layers.4.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.4.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.4.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.4.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
-    "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
-    "model.layers.5.input_layernorm.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.5.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.input_layernorm.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.6.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.7.input_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.7.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.7.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.7.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.7.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.7.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.7.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.7.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.7.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
-    "model.layers.8.input_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.8.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.input_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
-    "model.layers.9.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
-    "model.norm.weight": "model-00009-of-00009.safetensors"
+    "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.15.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.15.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.15.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.24.input_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.24.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.24.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.24.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+    "model.layers.25.input_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.25.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+    "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.6.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+    "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+    "model.norm.weight": "model-00004-of-00004.safetensors"
   }
 }
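
`weight_map` records which shard file holds each tensor, so loaders can fetch only the shards they need. A minimal sketch (assuming the files are downloaded locally) of resolving one tensor through the index with the `safetensors` library:

```py
# Sketch: look up which shard holds a tensor, then load just that tensor.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.embed_tokens.weight"
shard = index["weight_map"][name]  # -> "model-00001-of-00004.safetensors"
with safe_open(shard, framework="pt") as st:
    tensor = st.get_tensor(name)
print(tensor.shape)  # expected: (256000, 3072), Gemma-7B's embedding matrix
```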
tokenizer_config.json CHANGED
@@ -1114,246 +1114,6 @@
       "single_word": false,
       "special": false
     },
-    "139": {
-      "content": "▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "140": {
-      "content": "▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "141": {
-      "content": "▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "142": {
-      "content": "▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "143": {
-      "content": "▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "144": {
-      "content": "▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "145": {
-      "content": "▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "146": {
-      "content": "▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "147": {
-      "content": "▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "148": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "149": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "150": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "151": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "152": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "153": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "154": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "155": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "156": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "157": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "158": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "159": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "160": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "161": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "162": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "163": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "164": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "165": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "166": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "167": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
-    "168": {
-      "content": "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": false
-    },
     "169": {
       "content": "<table>",
       "lstrip": false,
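
The entries removed above (ids 139-168) are Gemma's multi-space tokens: runs of 2 to 31 `▁` characters used to encode whitespace and indentation. The commit drops their overrides from `added_tokens_decoder` in tokenizer_config.json, while the tokens themselves appear to remain in the underlying SentencePiece vocabulary. A quick check (a sketch; the expected output is an assumption based on the base Gemma tokenizer):

```py
# Sketch: inspect the whitespace-run tokens whose config overrides were removed.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ycjcl868/Gemma-1.1-7B-Chinese-Chat")
print(tok.convert_ids_to_tokens([139, 140, 168]))
# expected (assumption): ['▁▁', '▁▁▁', '▁' * 31]
```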