SandLogicTechnologies committed
Commit bf7b066 · Parent: 502ee27
Update README.md

README.md CHANGED
@@ -23,7 +23,7 @@ These quantized models offer improved efficiency while maintaining performance.

 ## Original Model Information

 - **Name**: [Nxcode-CQ-7B-orpo](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo)
-- **Base Model**: Qwen/CodeQwen1.5-7B
+- **Base Model**: [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
 - **Fine-tuning Approach**: Monolithic Preference Optimization without Reference Model
 - **Fine-tuning Data**: 100k samples of high-quality ranking data
 - **Model Type**: Transformer-based decoder-only language model