(Possibly) the highest-quality coding dataset on Hugging Face. And it's free for you to use.

by rombodawg - opened

I have created LimitlessCodeTraining, my most refined and filtered coding dataset, with over 640,000 lines of pure, high-quality coding data. I invite you to fine-tune your model further on this data, or add it to your own dataset and retrain your model with it.
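
If you want a quick look before training on it, here is a minimal sketch using the datasets library. The repo id rombodawg/LimitlessCodeTraining and the train split are assumptions based on this post, so check the dataset card for the actual id and schema:

from datasets import load_dataset

# Repo id and split name are assumptions; verify them on the dataset card.
ds = load_dataset("rombodawg/LimitlessCodeTraining", split="train")

print(ds)     # row count and column names
print(ds[0])  # inspect one record before fine-tuning on it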

Cheers,
Rombodawg

link:

Oh, and please make a 13B model. I know you guys don't want to, but it really helps people who don't have 3090/4090 GPUs. For example, my 3080 10GB can only run 13B-parameter models when they are quantized to 4-bit.

So please 🙏🙏
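
The VRAM arithmetic behind this is straightforward: at 4 bits per weight, the weights alone take roughly params × 0.5 bytes, so a 13B model needs about 6.5GB plus KV cache and buffers, which fits in 10GB, while a 34B model needs about 17GB before any overhead. A rough estimator, where the 1.2 overhead factor is a loose assumption:

def vram_gb(params_billion, bits=4, overhead=1.2):
    # Weights take params * (bits / 8) bytes; the overhead factor is a rough
    # allowance for KV cache and runtime buffers, not an exact figure.
    return params_billion * bits / 8 * overhead

print(vram_gb(13))  # ~7.8 GB  -> fits a 10GB 3080
print(vram_gb(34))  # ~20.4 GB -> needs a 24GB-class card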

My PC has 64GB of RAM, an RTX 4080 16GB, and an i7-13700K. I am running Ubuntu 22.04.

Both llama.cpp and oobabooga work surprisingly well with these 30B and 34B models. Here is the command I used for this CodeLlama, getting 3 tokens/s. Quite happy about it!

./main -t 14 -ngl 32 -m models/phind-codellama-34b-v2.Q5_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins --batch-size 256
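
For reference: -t 14 sets the CPU thread count, -ngl 32 offloads 32 layers to the GPU, -c 2048 sets the context window, -n -1 generates until stopped, and -i -ins puts the old llama.cpp main binary into interactive instruct mode. On a 16GB card, raising -ngl to offload more layers is usually the biggest single lever for tokens/s.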

@5ven1 bruh, 3 tokens per second? Some people have entire projects they need to work on with AI for coding, and we need 15-20 tokens per second minimum.

As someone who has a 3090, even I like 13B models just because of how damn fast they run.

That said, maybe EXL2 will save the day for you:
https://huggingface.co/latimar/Phind-Codellama-34B-v2-exl2
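
EXL2 repos usually publish each bitrate on its own branch, so you download the one that fits your card. A minimal sketch with huggingface_hub; the branch name "4.0bpw" is an assumption, so check the repo's branch list for what was actually uploaded:

from huggingface_hub import snapshot_download

# The revision (branch) name is an assumption; EXL2 repos typically keep one
# branch per bitrate, e.g. something like "4.0bpw".
snapshot_download(
    repo_id="latimar/Phind-Codellama-34B-v2-exl2",
    revision="4.0bpw",
    local_dir="models/phind-codellama-34b-v2-exl2",
)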

Just go for the 4-bit GPTQ made by TheBloke; you only need ~20GB of VRAM, which stuffs the whole 3090, and it runs at about 20 tokens/s with ExLlamaV2 on text-generation-webui.
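
Outside the webui, you can also load a GPTQ checkpoint straight through transformers (with the optimum and auto-gptq packages installed), though that path uses their GPTQ kernels rather than ExLlamaV2. A minimal sketch; the repo id below is an assumption based on this comment:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id is an assumption; requires optimum + auto-gptq to be installed.
model_id = "TheBloke/Phind-CodeLlama-34B-v2-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))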

@Yhyu13 I only have a 3080, and not everyone can afford to go out and buy a new graphics card on any given day of the week.

@5ven1 bruh, 3 tokens per second? Some people have entire projects they need to work on with AI for coding, and we need 15-20 tokens per second minimum.

That seemed okay for light use cases like writing shorter scripts; it would certainly be painful for large projects. I am waiting for my 3090 and will re-test then.
