limcheekin committed
Commit 94e3839 (1 parent: 6c3814d)

feat: updated for Mistral-7B-OpenOrca-GGUF model

Files changed (4)
  1. Dockerfile +1 -1
  2. README.md +5 -5
  3. index.html +6 -16
  4. mistral-7b-instruct.ipynb +0 -0
Dockerfile CHANGED
@@ -15,7 +15,7 @@ RUN pip install -U pip setuptools wheel && \
 
 # Download model
 RUN mkdir model && \
-    curl -L https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf -o model/gguf-model.bin
+    curl -L https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF/resolve/main/mistral-7b-openorca.Q4_K_M.gguf -o model/gguf-model.bin
 
 COPY ./start_server.sh ./
 COPY ./main.py ./
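The download step above fetches a multi-gigabyte file with `curl -L`; if the URL is wrong, curl can silently save an HTML error page as `model/gguf-model.bin`. A small sketch (not part of this repo, purely illustrative) that verifies the file really is GGUF before the server tries to load it — GGUF files begin with the 4-byte magic `b"GGUF"`:

```python
def is_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example check after the Dockerfile's download step:
# is_gguf("model/gguf-model.bin")
```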
README.md CHANGED
@@ -1,20 +1,20 @@
 ---
-title: Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)
+title: Mistral-7B-OpenOrca-GGUF (Q4_K_M)
 colorFrom: purple
 colorTo: blue
 sdk: docker
 models:
-  - mistralai/Mistral-7B-Instruct-v0.1
-  - TheBloke/Mistral-7B-Instruct-v0.1-GGUF
+  - Open-Orca/Mistral-7B-OpenOrca
+  - TheBloke/Mistral-7B-OpenOrca-GGUF
 tags:
   - inference api
   - openai-api compatible
   - llama-cpp-python
-  - Mistral-7B-Instruct-v0.1-GGUF
+  - Mistral-7B-OpenOrca-GGUF
   - gguf
 pinned: false
 ---
 
-# Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)
+# Mistral-7B-OpenOrca-GGUF (Q4_K_M)
 
 Please refer to the [index.html](index.html) for more information.
index.html CHANGED
@@ -1,10 +1,10 @@
 <!DOCTYPE html>
 <html>
   <head>
-    <title>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</title>
+    <title>Mistral-7B-OpenOrca-GGUF (Q4_K_M)</title>
   </head>
   <body>
-    <h1>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</h1>
+    <h1>Mistral-7B-OpenOrca-GGUF (Q4_K_M)</h1>
     <p>
       With the utilization of the
       <a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
@@ -16,27 +16,17 @@
     <ul>
       <li>
         The API endpoint:
-        <a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1"
-          >https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1</a
+        <a href="https://limcheekin-mistral-7b-openorca-gguf.hf.space/v1"
+          >https://limcheekin-mistral-7b-openorca-gguf.hf.space/v1</a
         >
       </li>
       <li>
         The API doc:
-        <a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs"
-          >https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs</a
+        <a href="https://limcheekin-mistral-7b-openorca-gguf.hf.space/docs"
+          >https://limcheekin-mistral-7b-openorca-gguf.hf.space/docs</a
         >
       </li>
     </ul>
-    <p>
-      Go ahead and try out the API endpoint yourself with the
-      <a
-        href="https://huggingface.co/spaces/limcheekin/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct.ipynb"
-        target="_blank"
-      >
-        mistral-7b-instruct.ipynb</a
-      >
-      jupyter notebook.
-    </p>
     <p>
       If you find this resource valuable, your support in the form of starring
       the space would be greatly appreciated. Your engagement plays a vital role
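As the index.html tags note, the space serves an OpenAI-API compatible endpoint via llama-cpp-python, so any OpenAI-style client can talk to it by pointing the base URL at the space. A minimal stdlib-only sketch (the base URL comes from the diff above; the prompt and sampling parameters are illustrative, and the request only succeeds while the space is running):

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "https://limcheekin-mistral-7b-openorca-gguf.hf.space/v1"


def build_chat_request(prompt: str) -> dict:
    # OpenAI-style chat-completion payload; llama-cpp-python's server
    # accepts the same schema at POST {BASE_URL}/chat/completions.
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.7,
    }


def chat(prompt: str) -> str:
    req = Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Network call: requires the space to be up and reachable.
    with urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The interactive API doc at `/docs` lists the full set of supported parameters.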
mistral-7b-instruct.ipynb DELETED
The diff for this file is too large to render. See raw diff