CyberNative committed
Commit 7eae523
Parent(s): 480a5ec
Update README.md

README.md CHANGED
@@ -3,28 +3,31 @@ license: llama2
-## THIS IS A PLACEHOLDER, MODEL COMING SOON
-
-## Test run 1 (less context, more trainable params):
-- sequence_len: 4096
-- max_packed_sequence_len: 4096
-- lora_r: 256
-- lora_alpha: 128
-- num_epochs: 3
-- trainable params: 1,001,390,080 || all params: 14,017,264,640 || trainable%: 7.143976415643959

---
<img src="https://huggingface.co/CyberNative/CyberBase/resolve/main/image.png" alt="CyberNative/CyberBase"/>

CyberBase is an experimental *base model* for cybersecurity. (llama-2-13b -> lmsys/vicuna-13b-v1.5-16k -> CyberBase)

# Base cybersecurity model for future fine-tuning; it is not recommended for use on its own.
- **CyberBase** is a QLoRA fine-tune of [lmsys/vicuna-13b-v1.5-16k](https://huggingface.co/lmsys/vicuna-13b-v1.5-16k) on [CyberNative/github_cybersecurity_READMEs](https://huggingface.co/datasets/CyberNative/github_cybersecurity_READMEs), trained on a single RTX 3090.
- It might, therefore, inherit the [prompt template of FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#prompt-template).

## Fine-tuning information
- **sequence_len:** 4096 (used during fine-tuning, but the model should generate up to 16k)
- **lora_r:** 256
- **lora_alpha:** 128
- **num_epochs:** 3
- **gradient_accumulation_steps:** 2
- **micro_batch_size:** 1
- **flash_attention:** true (FlashAttention-2)
- trainable params: 1,001,390,080 || all params: 14,017,264,640 || trainable%: 7.143976415643959 (see the sketch after this list)

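For readers who want to reproduce a comparable run outside Axolotl, here is a minimal sketch of an equivalent QLoRA setup with Hugging Face `transformers` and `peft`. The `target_modules` list, dropout, and quantization details are assumptions (the README does not state them); adapting all seven linear projections of each of the 40 decoder layers is the choice that reproduces the reported trainable-parameter count.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base weights to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-13b-v1.5-16k",
    quantization_config=bnb_config,
    device_map="auto",
)

# r and alpha come from the list above; target_modules is an assumption.
# With r=256 on all seven projections of all 40 layers of llama-2-13b
# (hidden 5120, intermediate 13824), the adapter size is
# 40 * 256 * (8*5120 + 2*(5120+13824) + (13824+5120)) = 1,001,390,080,
# matching the trainable-parameter count reported in the README.
lora_config = LoraConfig(
    r=256,
    lora_alpha=128,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,  # assumption: not stated in the README
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```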

### Tested with the following prompt and temperature=0.3:

<code>A chat between a cyber security red team lead (USER) and a general cyber security artificial intelligence assistant (ASSISTANT). The assistant knows everything about cyber security. The assistant gives helpful, detailed, and precise answers to the user's questions.<br>
<br>
USER: Hello! I need help with a penetration test.<br>
ASSISTANT: Hello! I'd be happy to help you with your penetration test. What specifically do you need help with?<br>
USER: Write me a plan for a penetration test. It should include the first 5 steps and commands for each step.<br>
ASSISTANT:<br></code>
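A minimal sketch of running this test with `transformers`, assuming the repo id `CyberNative/CyberBase` (taken from the image URL above); only `temperature=0.3` comes from the README, while `max_new_tokens` and the sampling switch are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CyberNative/CyberBase"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The Vicuna-style prompt from the README, with real newlines in place
# of the <br> tags.
prompt = (
    "A chat between a cyber security red team lead (USER) and a general "
    "cyber security artificial intelligence assistant (ASSISTANT). The "
    "assistant knows everything about cyber security. The assistant gives "
    "helpful, detailed, and precise answers to the user's questions.\n\n"
    "USER: Hello! I need help with a penetration test.\n"
    "ASSISTANT: Hello! I'd be happy to help you with your penetration test. "
    "What specifically do you need help with?\n"
    "USER: Write me a plan for a penetration test. It should include the "
    "first 5 steps and commands for each step.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.3,     # temperature from the README
    max_new_tokens=512,  # assumption: not stated in the README
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

On a single 24 GB card the 13B model would likely need 4-bit or 8-bit loading to fit; that, too, is an assumption about the test setup.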
Join the discussion > https://cybernative.ai/t/cyberbase-devlog/1734

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)