Joseph717171 committed 46874c4 (parent: 578b7f0): Update README.md

README.md CHANGED
@@ -1 +1,3 @@
- Custom GGUF quants of Google’s [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), where the Output Tensors are quantized to Q8_0 or kept at F32 while the Embeddings are kept at F32. 🧠🔥🚀
+ Custom GGUF quants of Google’s [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), where the Output Tensors are quantized to Q8_0 or kept at F32 while the Embeddings are kept at F32. 🧠🔥🚀
+
+ Notes: Great SMOL LLM for on-device inference. 😋
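A quant recipe like the one described above can typically be reproduced with llama.cpp's `llama-quantize` tool, which accepts per-tensor-type overrides. A minimal sketch, assuming a local llama.cpp build and an F32 GGUF conversion of the model (all file and directory names below are illustrative, not the ones used for this repo):

```shell
# Sketch: build a Q8_0 quant whose token embeddings stay at F32 and whose
# output tensor is Q8_0, as the README describes. Paths are assumptions.

# 1. Convert the HF checkpoint to an F32 GGUF (convert_hf_to_gguf.py ships
#    with llama.cpp):
python convert_hf_to_gguf.py ./gemma-2-2b-it \
    --outtype f32 \
    --outfile gemma-2-2b-it-F32.gguf

# 2. Quantize, overriding the embedding tensor type (use
#    "--output-tensor-type f32" instead for the all-F32-output variant):
./llama-quantize \
    --token-embedding-type f32 \
    --output-tensor-type q8_0 \
    gemma-2-2b-it-F32.gguf gemma-2-2b-it-Q8_0.gguf q8_0
```

Keeping the embeddings (and optionally the output tensor) at full precision trades a somewhat larger file for better output quality than a uniform Q8_0 quant, which is a common choice for small on-device models like this one.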