Add F16 and BF16 quantization (1 reply)
#129 opened 11 days ago by andito

Add Llama 3.1 license
#121 opened 2 months ago by jxtngx

Phi-3.5-MoE-instruct (6 replies)
#117 opened 2 months ago by goodasdgood

Arm optimized quants (1 reply)
#113 opened 3 months ago by SaisExperiments

Please support this method: (7 replies)
#96 opened 5 months ago by ZeroWw

Support Q2 imatrix quants
#95 opened 5 months ago by Dampfinchen

Maybe impose a max model size? (3 replies)
#33 opened 7 months ago by pcuenq