### AgentLMs as service

#### Serving with [vLLM](https://github.com/vllm-project/vllm) (GPU)
We recommend using [vLLM](https://github.com/vllm-project/vllm) and [FastChat](https://github.com/lm-sys/FastChat) to deploy the model inference service. First, install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):
```bash
pip install vllm
```

Once the service is running, you can query it through its OpenAI-compatible API:
```bash
curl http://localhost:8888/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "kagentlms_qwen_7b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```
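The same request can be made from Python with only the standard library. This is an illustrative sketch, assuming the service above is listening on `localhost:8888`; the `build_request` and `chat` helper names are hypothetical, not part of the project.

```python
import json
from urllib import request

# The same JSON body as the curl command above.
PAYLOAD = {
    "model": "kagentlms_qwen_7b_mat",
    "messages": [{"role": "user", "content": "Who is Andy Lau"}],
}

def build_request(url="http://localhost:8888/v1/chat/completions"):
    # Build a POST request carrying the JSON payload with the right
    # content type, mirroring the curl invocation.
    return request.Request(
        url,
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat():
    # Requires the deployed service to be running; returns the parsed
    # OpenAI-style chat completion response.
    with request.urlopen(build_request()) as resp:
        return json.loads(resp.read())
```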

#### Serving with [llama.cpp](https://github.com/ggerganov/llama.cpp) (CPU)
llama-cpp-python offers a web server that aims to act as a drop-in replacement for the OpenAI API, which lets you use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.). The converted model can be found at [kwaikeg/kagentlms_qwen_7b_mat_gguf](https://huggingface.co/kwaikeg/kagentlms_qwen_7b_mat_gguf).

To install the server package and get started:
```bash
pip install "llama-cpp-python[server]"
python3 -m llama_cpp.server --model kagentlms_qwen_7b_mat_gguf/ggml-model-q4_0.gguf --chat_format chatml --port 8888
```
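The `--chat_format chatml` flag tells the server to render conversations in the ChatML template that Qwen-based models expect. Roughly, each turn is wrapped in `<|im_start|>`/`<|im_end|>` markers and an assistant turn is opened for the model to complete — the sketch below is illustrative, not the library's exact implementation:

```python
def to_chatml(messages):
    # Wrap each message as <|im_start|>role\ncontent<|im_end|>, then open
    # an assistant turn so the model continues from there.
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = to_chatml([{"role": "user", "content": "Who is Andy Lau"}])
```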

### Citation
```
@article{pan2023kwaiagents,