---
license: bsd
datasets:
- ManthanKulakarni/Text2JQL_v2
language:
- en
pipeline_tag: text-generation
tags:
- LLaMa
- JQL
- Jira
- GGML
- GGML-q8_0
- GPU
- CPU
- 7B
- llama.cpp
- text-generation-webui
---

These are GGML-format model files for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## How to run in `llama.cpp`

```
./main -t 10 -ngl 32 -m ggml-model-q8_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write JQL(Jira query Language) for give input ### Input: stories assigned to manthan which are created in last 10 days with highest priority and label is set to release ### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

To have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## How to run using `LangChain`

##### Installation on CPU
```
pip install llama-cpp-python
```

##### Installation on GPU
```
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
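
Before wiring the model into LangChain, you can optionally check that `llama-cpp-python` loads the GGML file on its own. This is a minimal sketch, not part of the original instructions: the model path, `n_gpu_layers`, and generation settings below are assumptions to adjust for your machine.

```python
from llama_cpp import Llama

# Load the quantized GGML model; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(
    model_path="./ggml-model-q8_0.bin",  # assumed local path to the downloaded file
    n_ctx=2048,                          # context window, matching the llama.cpp example above
    n_gpu_layers=32,                     # assumed GPU offload; adjust to your VRAM
)

# Run the same instruction-style prompt used in the llama.cpp example.
output = llm(
    "### Instruction: Write JQL(Jira query Language) for give input "
    "### Input: stories assigned to manthan which are created in last 10 days "
    "with highest priority and label is set to release ### Response:",
    max_tokens=128,
    stop=["###"],
)
print(output["choices"][0]["text"])
```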

To run the model through LangChain, load it with the `LlamaCpp` wrapper:

```python
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

n_gpu_layers = 40  # Change this value based on your model and your GPU VRAM pool.
n_batch = 512      # Should be between 1 and n_ctx; consider the amount of VRAM in your GPU.
n_ctx = 2048       # Prompt context window.

# Stream generated tokens to stdout as they are produced.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# Make sure the model path is correct for your system!
llm = LlamaCpp(
    model_path="./ggml-model-q8_0.bin",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    verbose=True,
    n_ctx=n_ctx,
)

llm("""### Instruction:
Write JQL(Jira query Language) for give input

### Input:
stories assigned to manthan which are created in last 10 days with highest priority and label is set to release

### Response:""")
```
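
The same prompt can also be parameterized so that only the natural-language request changes per call. The following is a minimal sketch using LangChain's `PromptTemplate` and the legacy `LLMChain` API; the model path and the example request are assumptions to adapt to your setup.

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import LlamaCpp

# Reuse the Text2JQL instruction format, with the user request as the only variable.
template = """### Instruction:
Write JQL(Jira query Language) for give input

### Input:
{request}

### Response:"""

prompt = PromptTemplate(template=template, input_variables=["request"])

# Assumed local path to the GGML file, as in the example above.
llm = LlamaCpp(model_path="./ggml-model-q8_0.bin", n_ctx=2048, verbose=True)

chain = LLMChain(prompt=prompt, llm=llm)

# Illustrative request; replace with your own Jira query description.
print(chain.run("bugs reported by manthan in the last 7 days"))
```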
For more information, refer to the [LangChain LlamaCpp documentation](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llamacpp).