DisOOM/Qwen1.5-22B-Chat-Merge-GGUF
Tags: Text Generation, Transformers, GGUF, PyTorch, qwen2, quantized, 2-bit, 3-bit, 4-bit precision, 5-bit, 6-bit, 8-bit precision, conversational, Inference Endpoints, text-generation-inference
License: tongyi-qianwen
Files and versions
Branch: main
1 contributor · History: 14 commits
Latest commit: Update README.md by DisOOM (3d8a43e, verified) · 8 months ago
.gitattributes · 1.78 kB · Rename ggml-model-Q8_0.gguf to Qwen1.5-22B-Chat-Merge-Q8_0.gguf · 8 months ago
Qwen1.5-22B-Chat-Merge-Q4_0.gguf · 12.6 GB · LFS · Upload Qwen1.5-22B-Chat-Merge-Q4_0.gguf · 8 months ago
Qwen1.5-22B-Chat-Merge-Q5_0.gguf · 15.3 GB · LFS · Upload Qwen1.5-22B-Chat-Merge-Q5_0.gguf · 8 months ago
Qwen1.5-22B-Chat-Merge-Q8_0.gguf · 23.4 GB · LFS · Rename ggml-model-Q8_0.gguf to Qwen1.5-22B-Chat-Merge-Q8_0.gguf · 8 months ago
README.md · 517 Bytes · Update README.md · 8 months ago
config.json · 29 Bytes · Upload config.json · 8 months ago