# DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

## 1. Introduction

We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo on code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance on general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advances across code-related tasks, as well as in reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its supported programming languages from 86 to 338 and extends the context length from 16K to 128K.

<p align="center">
<img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png">
</p>

In standard benchmark evaluations, DeepSeek-Coder-V2 outperforms closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro on coding and math benchmarks. The list of supported programming languages can be found [here](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/supported_langs.txt).
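
Once the weights are available from the Model Downloads section below, the model should load like any other causal language model in Hugging Face `transformers`. The sketch below is a minimal, illustrative example only: the repository ID `deepseek-ai/DeepSeek-Coder-V2-Lite-Base`, the bfloat16/GPU settings, and the generation parameters are assumptions for demonstration, not details confirmed by this README.

```python
# Minimal, hypothetical usage sketch with Hugging Face transformers.
# The model ID below is an assumption for illustration; see the
# Model Downloads section for the actual released checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Base"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce GPU memory use
    trust_remote_code=True,
).cuda()

# Ask the base model to complete a code prompt.
prompt = "# write a quick sort algorithm in Python\ndef quick_sort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
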
## 2. Model Downloads