Qwen2.5-Coder-1.5B-Instruct for speculative decoding?

#1
opened by Handgun1773

The title says it: IIRC exl2 supports speculative decoding, we have a smol Qwen coder, and nobody has made an exl2 quant of it yet.
If you have the time, it would be very cool of you to do it.
Base models would also be very nice for code completion.
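
For context, the setup I have in mind is tabbyAPI with the big coder quant as the main model and the 1.5B as the draft model. A rough config.yml sketch with placeholder folder names; the exact keys and nesting may differ between tabbyAPI versions, so check the shipped config_sample.yml:

```yaml
# tabbyAPI config.yml sketch: speculative decoding with a small draft model
# (key names approximate; model folder names are placeholders)
model:
  model_dir: models
  model_name: Qwen2.5-Coder-32B-Instruct-exl2          # large target model

draft_model:
  draft_model_dir: models
  draft_model_name: Qwen2.5-Coder-1.5B-Instruct-exl2   # small draft model
```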

OK, so I made exl2 quants of the base models to run through tabbyAPI and use for code completion with continue.dev:
https://huggingface.co/Handgun1773/Qwen2.5-Coder-1.5B-BASE-8.0bpw-exl2
https://huggingface.co/Handgun1773/Qwen2.5-Coder-7B-BASE-8.0bpw-exl2
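
To serve one of these, the tabbyAPI side is roughly the config.yml fragment below (a sketch; the folder name is whatever you cloned the quant into, and keys may vary by version). continue.dev then just needs its tab-autocomplete model pointed at tabbyAPI's OpenAI-compatible endpoint with the same model name.

```yaml
# tabbyAPI config.yml fragment: load the base quant for code completion
model:
  model_dir: models
  model_name: Qwen2.5-Coder-1.5B-BASE-8.0bpw-exl2   # folder containing the exl2 quant
  max_seq_len: 8192                                 # enough context for FIM-style completion
```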

Speculative decoding doesn't seem to give any noticeable speedup, so I'm just serving the base model with my inline_model_loading: true + LiteLLM config, which lets me use instruct models for chat as well as the base model for code completion.
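
In case it helps anyone, the overall setup is: tabbyAPI with inline model loading enabled so requests can pick a model by name, fronted by a LiteLLM proxy that maps friendly names to the tabbyAPI endpoint. A sketch with placeholder model names, port, and API key; exact option names may differ from your versions:

```yaml
# tabbyAPI config.yml fragment: let clients load/switch models per request
model:
  model_dir: models
  inline_model_loading: true
---
# LiteLLM proxy config.yaml: route chat and completion through tabbyAPI's OpenAI-compatible API
model_list:
  - model_name: qwen-coder-instruct                     # instruct model for chat
    litellm_params:
      model: openai/Qwen2.5-Coder-7B-Instruct-exl2      # placeholder folder name
      api_base: http://localhost:5000/v1
      api_key: dummy
  - model_name: qwen-coder-base                         # base model for tab completion
    litellm_params:
      model: openai/Qwen2.5-Coder-1.5B-BASE-8.0bpw-exl2
      api_base: http://localhost:5000/v1
      api_key: dummy
```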
