Joseph717171/Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF
Tags: GGUF · Inference Endpoints · imatrix · conversational
Branch: main · 1 contributor · History: 18 commits
Latest commit by Joseph717171: Delete gemma-2-2B-it-OQ8_0.EF32.IQ8_0.gguf (4baaecb, verified, about 2 months ago)
| File | Size | LFS | Last commit message | Date |
| --- | --- | --- | --- | --- |
| .gitattributes | 1.95 kB | | Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub | 2 months ago |
| README.md | 282 Bytes | | Update README.md | 2 months ago |
| gemma-2-2B-it-OF32.EF32.IQ8_0.gguf | 4.52 GB | LFS | Upload gemma-2-2B-it-OF32.EF32.IQ8_0.gguf with huggingface_hub | 2 months ago |
| gemma-2-2b-it-OF32.EF32.IQ4_K_M.gguf | 3.58 GB | LFS | Upload gemma-2-2b-it-OF32.EF32.IQ4_K_M.gguf with huggingface_hub | 2 months ago |
| gemma-2-2b-it-OF32.EF32.IQ6_K.gguf | 4.03 GB | LFS | Rename gemma-2-2b-it-OF32.EF32.IQ6_k.gguf to gemma-2-2b-it-OF32.EF32.IQ6_K.gguf | 2 months ago |
| gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf | 1.85 GB | LFS | Upload gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf with huggingface_hub | 2 months ago |
| gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf | 2.29 GB | LFS | Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub | 2 months ago |
| gemma-2-2b-it-OQ8_0.EF32.IQ8_0.gguf | 2.78 GB | LFS | Upload gemma-2-2b-it-OQ8_0.EF32.IQ8_0.gguf with huggingface_hub | 2 months ago |
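The GGUF files listed above can be fetched programmatically. A minimal sketch of building a direct download URL for one of the listed quants, using the standard Hugging Face `/resolve/` endpoint (the repo id and filename are taken from the listing above; in practice `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` is the more robust route, since it handles caching and authentication):

```python
def resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct download URL for a file in a Hugging Face repo.

    Hugging Face serves repo files (including LFS-backed GGUF weights)
    from the /resolve/ endpoint.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


REPO_ID = "Joseph717171/Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF"

# The 1.85 GB IQ4_K_M quant from the file listing above.
print(resolve_url(REPO_ID, "gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf"))
```

The resulting file can then be passed to any GGUF-capable runtime (e.g. llama.cpp) as a local model path.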