add a readme section about theming

README.md (changed)

These variables will enable the openID sign-in modal for users.

### Theming

You can use a few environment variables to customize the look and feel of chat-ui. These are by default:
|
80 |
+
|
81 |
+
```
|
82 |
+
PUBLIC_APP_NAME=ChatUI
|
83 |
+
PUBLIC_APP_ASSETS=chatui
|
84 |
+
PUBLIC_APP_COLOR=blue
|
85 |
+
PUBLIC_APP_DATA_SHARING=
|
86 |
+
PUBLIC_APP_DISCLAIMER=
|
87 |
+
```

- `PUBLIC_APP_NAME` The name used as a title throughout the app.
- `PUBLIC_APP_ASSETS` Used to find logos & favicons in `static/$PUBLIC_APP_ASSETS`; current options are `chatui` and `huggingchat`.
- `PUBLIC_APP_COLOR` Can be any of the [tailwind colors](https://tailwindcss.com/docs/customizing-colors#default-color-palette).
- `PUBLIC_APP_DATA_SHARING` Can be set to 1 to add a toggle in the user settings that lets your users opt in to data sharing with model creators.
- `PUBLIC_APP_DISCLAIMER` If set to 1, we show a disclaimer about generated outputs on login.
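
For example, a hypothetical `.env.local` (the values below are purely illustrative) could rebrand the app like this:

```
PUBLIC_APP_NAME=MyAssistant
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_COLOR=emerald
PUBLIC_APP_DATA_SHARING=1
PUBLIC_APP_DISCLAIMER=1
```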

### Custom models

You can customize the parameters passed to the model or even use a new model by updating the `MODELS` variable in your `.env.local`. The default one can be found in `.env` and looks like this:

```
MODELS=`[
  {
    "name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
    ...
  }
]`
```

You can change things like the parameters, or customize the preprompt to better suit your needs. You can also add more models by adding more objects to the array, with different preprompts for example.
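
As a rough sketch, an extra entry in that array might look like the following; the model name is a placeholder, and the exact set of supported keys should be checked against the default `.env`:

```
{
  "name": "my-org/my-model",
  "preprompt": "Below is a conversation between a user and a helpful assistant.",
  "parameters": {
    "temperature": 0.9,
    "max_new_tokens": 1024
  }
}
```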

If you want to, you can even run your own models locally, by having a look at our endpoint project, [text-generation-inference](https://github.com/huggingface/text-generation-inference). You can then add your own endpoints to the `MODELS` variable in `.env.local`, by adding an `"endpoints"` key for each model in `MODELS`.

```
{
  // rest of the model config here
  "endpoints": [{"url": "https://HOST:PORT/generate_stream"}]
}
```
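
Here `HOST:PORT` is a placeholder for wherever your text-generation-inference server is reachable; `/generate_stream` is the streaming route that text-generation-inference exposes.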

If `endpoints` is left unspecified, ChatUI will look for the model on the hosted Hugging Face inference API using the model name.
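
For instance, with the default configuration above, requests would go to the hosted inference API for `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5`.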

For `Bearer` you can use a token, which can be grabbed from [here](https://huggingface.co/settings/tokens).

You can then add the generated information and the `authorization` parameter to your `.env.local`.

```
"endpoints": [
  {
    "url": "https://HOST:PORT/generate_stream",
    "authorization": "Basic VVNFUjpQQVNT"
  }
]
```
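
Note that `VVNFUjpQQVNT` is simply `USER:PASS` base64-encoded; you can generate the value for your own credentials with, for example, `echo -n "USER:PASS" | base64`.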

### Models hosted on multiple custom endpoints

If the model being hosted will be available on multiple servers/instances, add the `weight` parameter to your `.env.local`. The `weight` will be used to determine the probability of requesting a particular endpoint.

```
"endpoints": [
  {
    "url": "https://HOST:PORT/generate_stream",
    "weight": 1
  },
  {
    "url": "https://HOST:PORT/generate_stream",
    "weight": 2
  }
  ...
]
```
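
Assuming an endpoint's selection probability is proportional to its `weight`, the second endpoint above would serve about twice as many requests as the first, i.e. 2/(1+2) ≈ 67% of them (ignoring the elided entries).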

## Deploying to a HF Space