shibing624 committed

Commit 5ed4e62
1 Parent(s): 5230da3

Upload 2 files


Retrained the LoRA weights to fit the new chatglm-6b base weights. Updates: 1) adjusted the token size, removing 2000 tokens; 2) updated the eos and pad token ids to avoid the infinite-generation problem.
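
For reference, a minimal usage sketch (not part of this commit) of loading the retrained adapter on top of the new base weights with peft. The adapter path is a placeholder, and passing the tokenizer's eos/pad token ids to generate() mirrors the fix described above:

from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Load the updated chatglm-6b base weights (hub id assumed here; the diff
# below records a local path) and the retrained LoRA adapter from this repo.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
base = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half()
model = PeftModel.from_pretrained(base, "path/to/this-adapter-repo")  # placeholder

# The commit updates the eos/pad token ids so generation can terminate;
# passing them explicitly at generate() time makes that intent visible.
inputs = tokenizer("介绍一下北京", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))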

Files changed (2)
  1. adapter_config.json +3 -3
  2. adapter_model.bin +1 -1
adapter_config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "base_model_name_or_path": "THUDM/chatglm-6b",
+  "base_model_name_or_path": "/home/flemingxu/disk/chatglm/chatglm_6b_new/",
   "bias": "none",
   "enable_lora": [
     true,
@@ -8,8 +8,8 @@
   ],
   "fan_in_fan_out": false,
   "inference_mode": true,
-  "lora_alpha": 32,
-  "lora_dropout": 0.1,
+  "lora_alpha": 16,
+  "lora_dropout": 0.05,
   "merge_weights": false,
   "modules_to_save": null,
   "peft_type": "LORA",
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c377a1ead2f27ed284eb16be60187c23cc0a1a50b303bdb2bebc3dbe9a653aad
+oid sha256:1900587ce7c4c4d0a005e675e22a2c3ef343c39daf0213d95f3b8acf08a8aa97
 size 14700953