Joseph717171 / Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF
GGUF · Inference Endpoints · imatrix · conversational
Joseph717171 committed on Sep 14
Commit 1356e8b • 1 Parent(s): 04d10b2
Create README.md
Files changed (1)
README.md (+1, -0)
README.md ADDED
@@ -0,0 +1 @@
+Custom GGUF quants of Google's [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. 🧠🔥🚀
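
Below is a minimal sketch of how a quant with this tensor layout can be produced with llama.cpp's `llama-quantize` tool, invoked here from Python via `subprocess`. This is not the author's exact recipe: the file names, the imatrix path, and the base quantization type (`Q8_0` is used here; the repo's IQ4_K_M variant may require a build that supports that type) are assumptions, and flag spellings can vary between llama.cpp versions.

```python
# Sketch only: quantize a GGUF so the output tensor stays at Q8_0 and the
# token embeddings stay at F32, matching the layout described above.
import subprocess

SRC_GGUF = "gemma-2-2b-it-F32.gguf"               # full-precision GGUF conversion (placeholder path)
DST_GGUF = "gemma-2-2b-it-OQ8_0-EF32-Q8_0.gguf"   # quantized output file (placeholder name)
IMATRIX = "imatrix.dat"                           # importance-matrix file (placeholder; repo is tagged `imatrix`)

cmd = [
    "./llama-quantize",
    "--imatrix", IMATRIX,              # apply an importance matrix during quantization
    "--output-tensor-type", "q8_0",    # keep the output tensor at Q8_0
    "--token-embedding-type", "f32",   # keep the token embeddings at F32
    SRC_GGUF,
    DST_GGUF,
    "Q8_0",                            # base quantization type for the remaining tensors
]

# Raises CalledProcessError if llama-quantize exits with a non-zero status.
subprocess.run(cmd, check=True)
```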