iandennismiller committed on
Commit
e0ccbf6
1 Parent(s): 5cbc5b3

initial commit of Q6_K with model card

Files changed (3):
  1. .gitattributes +1 -0
  2. Readme.md +22 -0
  3. mistral-v0.1-7b-Q6K.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
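The added line routes every `.gguf` file through Git LFS alongside the existing archive patterns. Patterns in `.gitattributes` match roughly like shell globs; a small illustration using Python's `fnmatch` (an approximation of gitattributes matching, not git's exact implementation, which has extra rules for `/` and `**`):

```python
from fnmatch import fnmatch

# Patterns from the .gitattributes hunk above, approximated as shell globs.
lfs_patterns = ["*.zip", "*.zst", "*tfevents*", "*.gguf"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(tracked_by_lfs("mistral-v0.1-7b-Q6K.gguf"))  # True
print(tracked_by_lfs("Readme.md"))                 # False
```

With this rule in place, the 5.9 GB model weight added in this commit is stored as an LFS object rather than a regular git blob.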
Readme.md ADDED
@@ -0,0 +1,22 @@
+---
+base_model: https://huggingface.co/advanced-stack/MistralAI-v0.1-GGUF
+inference: false
+license: cc-by-nc-4.0
+model_creator: mistral.ai
+model_name: Mistral v0.1
+model_type: llama
+prompt_template: '{prompt}'
+quantized_by: iandennismiller
+pipeline_tag: text-generation
+tags:
+- mistral
+---
+
+# Mistral 7b
+
+From https://mistral.ai
+
+> Mistral-7B-v0.1 is a small, yet powerful model adaptable to many use-cases. Mistral 7B is better than Llama 2 13B on all benchmarks, has natural coding abilities, and 8k sequence length. It’s released under Apache 2.0 licence. We made it easy to deploy on any cloud, and of course on your gaming GPU
+
+More info: https://mistral.ai/news/announcing-mistral-7b/
+
mistral-v0.1-7b-Q6K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ca5b83164ff5ea4f8b13102ebf5d87dfd776365fc14b285ac8374469f196632
+size 5942064608
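The weight file itself never enters the git history; the repository stores only the three-line LFS pointer shown above (version, content oid, size in bytes). A minimal sketch of reading such a pointer's fields — `parse_lfs_pointer` is a hypothetical helper for illustration, not part of git-lfs:

```python
# Parse a git-lfs pointer file (spec v1) into a dict of its key/value fields.
# Illustrative only; git-lfs itself enforces stricter validation and ordering.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0ca5b83164ff5ea4f8b13102ebf5d87dfd776365fc14b285ac8374469f196632
size 5942064608
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 5942064608 bytes, i.e. roughly 5.5 GiB for this Q6_K file
```

The `oid` is the SHA-256 of the actual file content, which is what the LFS server uses to resolve the pointer to the real `.gguf` blob on checkout.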