
DeepSeek-Coder-V2-Lite-Base fine-tuned for 0.25 epochs on adamo1139/ise-uiuc_Magicoder-Evol-Instruct-110K-ShareGPT via llama-factory at 3000 context length, using QLoRA with rank 32 and alpha 32.
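
For anyone who wants to set up something comparable outside of llama-factory, here is a minimal QLoRA sketch with transformers + peft. The rank/alpha values mirror this card; the 4-bit NF4 quantization, compute dtype, and `target_modules` are assumptions for illustration, not the exact llama-factory configuration used for this run.

```python
# Minimal QLoRA sketch (assumptions noted in comments; actual run used llama-factory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL = "deepseek-ai/DeepSeek-Coder-V2-Lite-Base"

# 4-bit NF4 base weights with bf16 compute -- typical QLoRA defaults (assumed).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Rank/alpha from this card; target_modules is an assumed set of attention projections.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```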

The prompt format is ChatML, but the ChatML-specific tokens (`<|im_start|>`, `<|im_end|>`) are not in the tokenizer, so the model sometimes spills random tokens. Definitely something to fix in the next version.
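
For reference, a minimal sketch of how a ChatML prompt for this model can be assembled as plain text (the helper name and example messages are illustrative, not part of this repo):

```python
# Build a ChatML-style prompt string. Since the ChatML tokens are not in the
# tokenizer, they are passed as plain text and generation may still emit stray tokens.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
```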

It's an early WIP; unless you are dying to try DeepSeek-Coder-V2-Lite finetunes, I suggest you don't use it :)

GGUF quants (deepseek2 architecture, 15.7B params) are available in 4-bit, 5-bit, 6-bit, and 8-bit.
