nsarrazin, Mishig, and victor committed
Commit 0e5c445
Parent: 6cfb775

Add support for tgi multimodal models (#531)


* wip: add support for tgi multimodal models

* wip work on passing images to prompt

* working idefics config!

* rm allowed conv feature

* lint

* Add image resizing

* fix ssr

* add upload button

* add delete button

* misc formatting

* lint

* server file size check

* optimistic update of images

* retry with images

* fix websearch button

* lint

* better error handling & max one image at a time

* replace test image by blank one

* disable loading on page change

* Fix sharing of images

* fix comments

* Update filedropzone (#544)

* Update src/lib/buildPrompt.ts

Co-authored-by: Mishig <[email protected]>

* small tweaks

* Fix merge conflicts

* lint

* wildcard image mime type

* fix lint and comment

* added comments

* added comment about file size

* Readme update

---------

Co-authored-by: Mishig <[email protected]>
Co-authored-by: Victor Mustar <[email protected]>

.env.template CHANGED
@@ -111,7 +111,7 @@ MODELS=`[
  },
  "promptExamples": [
  {
- "title": "Write an email from bullet list",
+ "title": "Write an email from bullet list",
  "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
  }, {
  "title": "Code a snake game",
PROMPTS.md CHANGED
@@ -31,3 +31,9 @@ System: {{preprompt}}\nUser:{{#each messages}}{{#ifUser}}{{content}}\nFalcon:{{/
  ```env
  <|system|>\n{{preprompt}}</s>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}</s>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}</s>\n{{/ifAssistant}}{{/each}}
  ```
+
+ ## IDEFICS
+
+ ```env
+ {{#each messages}}{{#ifUser}}User: {{content}}{{/ifUser}}<end_of_utterance>\nAssistant: {{#ifAssistant}}{{content}}\n{{/ifAssistant}}{{/each}}
+ ```
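For illustration only (not part of the diff): a hand-rolled TypeScript sketch of what the IDEFICS Handlebars template above renders for a turn. Note the template emits `<end_of_utterance>\nAssistant: ` for every message, not only user ones.

```typescript
type Msg = { from: "user" | "assistant"; content: string };

// Hand-rolled equivalent of the IDEFICS template above (sketch only).
function renderIdefics(messages: Msg[]): string {
	return messages
		.map(
			(m) =>
				(m.from === "user" ? `User: ${m.content}` : "") +
				"<end_of_utterance>\nAssistant: " +
				(m.from === "assistant" ? `${m.content}\n` : "")
		)
		.join("");
}
```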
README.md CHANGED
@@ -168,7 +168,65 @@ MODELS=`[
 
  You can change things like the parameters, or customize the preprompt to better suit your needs. You can also add more models by adding more objects to the array, with different preprompts for example.
 
- #### OpenAI API compatible models
 
  Chat UI can be used with any API server that supports OpenAI API compatibility, for example [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai), [LocalAI](https://github.com/go-skynet/LocalAI), [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), and [ialacol](https://github.com/chenhunghan/ialacol).
 
@@ -217,7 +275,7 @@ MODELS=`[{
  }]`
  ```
 
- #### Llama.cpp API server
 
  chat-ui also supports the llama.cpp API server directly without the need for an adapter. You can do this using the `llamacpp` endpoint type.
 
@@ -253,70 +311,29 @@ MODELS=[
 
  Start chat-ui with `npm run dev` and you should be able to chat with Zephyr locally.
 
- #### Custom prompt templates
-
- By default, the prompt is constructed using `userMessageToken`, `assistantMessageToken`, `userMessageEndToken`, `assistantMessageEndToken`, `preprompt` parameters and a series of default templates.
-
- However, these templates can be modified by setting the `chatPromptTemplate` and `webSearchQueryPromptTemplate` parameters. Note that if WebSearch is not enabled, only `chatPromptTemplate` needs to be set. The template language is <https://handlebarsjs.com>. The templates have access to the model's prompt parameters (`preprompt`, etc.). However, if the templates are specified it is recommended to inline the prompt parameters, as using the references (`{{preprompt}}`) is deprecated.
-
- For example:
-
- ```prompt
- <System>You are an AI, called ChatAI.</System>
- {{#each messages}}
- {{#ifUser}}<User>{{content}}</User>{{/ifUser}}
- {{#ifAssistant}}<Assistant>{{content}}</Assistant>{{/ifAssistant}}
- {{/each}}
- <Assistant>
- ```
-
- ##### chatPromptTemplate
-
- When querying the model for a chat response, the `chatPromptTemplate` template is used. `messages` is an array of chat messages, it has the format `[{ content: string }, ...]`. To identify if a message is a user message or an assistant message the `ifUser` and `ifAssistant` block helpers can be used.
 
- The following is the default `chatPromptTemplate`, although newlines and indentiation have been added for readability. You can find the prompts used in production for HuggingChat [here](https://github.com/huggingface/chat-ui/blob/main/PROMPTS.md).
-
- ```prompt
- {{preprompt}}
- {{#each messages}}
- {{#ifUser}}{{@root.userMessageToken}}{{content}}{{@root.userMessageEndToken}}{{/ifUser}}
- {{#ifAssistant}}{{@root.assistantMessageToken}}{{content}}{{@root.assistantMessageEndToken}}{{/ifAssistant}}
- {{/each}}
- {{assistantMessageToken}}
- ```
-
- ##### webSearchQueryPromptTemplate
-
- When performing a websearch, the search query is constructed using the `webSearchQueryPromptTemplate` template. It is recommended that the prompt instructs the chat model to only return a few keywords.
-
- The following is the default `webSearchQueryPromptTemplate`.
-
- ```prompt
- {{userMessageToken}}
- My question is: {{message.content}}.
 
- Based on the conversation history (my previous questions are: {{previousMessages}}), give me an appropriate query to answer my question for web search. You should not say more than query. You should not say any words except the query. For the context, today is {{currentDate}}
 
- {{userMessageEndToken}}
- {{assistantMessageToken}}
  ```
 
- #### Running your own models using a custom endpoint
-
- If you want to, instead of hitting models on the Hugging Face Inference API, you can run your own models locally.
-
- A good option is to hit a [text-generation-inference](https://github.com/huggingface/text-generation-inference) endpoint. This is what is done in the official [Chat UI Spaces Docker template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) for instance: both this app and a text-generation-inference server run inside the same container.
-
- To do this, you can add your own endpoints to the `MODELS` variable in `.env.local`, by adding an `"endpoints"` key for each model in `MODELS`.
 
- ```env
- {
- // rest of the model config here
- "endpoints": [{"url": "https://HOST:PORT"}]
- }
- ```
-
- If `endpoints` are left unspecified, ChatUI will look for the model on the hosted Hugging Face inference API using the model name.
 
  ### Custom endpoint authorization
 
@@ -343,55 +360,6 @@ You can then add the generated information and the `authorization` parameter to
  ]
  ```
 
- ### Amazon
-
- #### SageMaker
-
- You can also specify your Amazon SageMaker instance as an endpoint for chat-ui. The config goes like this:
-
- ```env
- "endpoints": [
- {
- "type" : "aws",
- "service" : "sagemaker"
- "url": "",
- "accessKey": "",
- "secretKey" : "",
- "sessionToken": "",
- "weight": 1
- }
- ]
- ```
-
- #### Lambda
-
- You can also specify your Amazon Lambda instance as an endpoint for chat-ui. The config goes like this:
-
- ```env
- "endpoints" : [
- {
- "type": "aws",
- "service": "lambda",
- "url": "",
- "accessKey": "",
- "secretKey": "",
- "sessionToken": "",
- "region": "",
- "weight": 1
- }
- ]
- ```
-
- You can get the `accessKey` and `secretKey` from your AWS user, under programmatic access.
-
- #### Client Certificate Authentication (mTLS)
-
- Custom endpoints may require client certificate authentication, depending on how you configure them. To enable mTLS between Chat UI and your custom endpoint, you will need to set the `USE_CLIENT_CERTIFICATE` to `true`, and add the `CERT_PATH` and `KEY_PATH` parameters to your `.env.local`. These parameters should point to the location of the certificate and key files on your local machine. The certificate and key files should be in PEM format. The key file can be encrypted with a passphrase, in which case you will also need to add the `CLIENT_KEY_PASSWORD` parameter to your `.env.local`.
-
- If you're using a certificate signed by a private CA, you will also need to add the `CA_PATH` parameter to your `.env.local`. This parameter should point to the location of the CA certificate file on your local machine.
-
- If you're using a self-signed certificate, e.g. for testing or development purposes, you can set the `REJECT_UNAUTHORIZED` parameter to `false` in your `.env.local`. This will disable certificate validation, and allow Chat UI to connect to your custom endpoint.
-
  #### Models hosted on multiple custom endpoints
 
  If the model being hosted will be available on multiple servers/instances add the `weight` parameter to your `.env.local`. The `weight` will be used to determine the probability of requesting a particular endpoint.
@@ -408,9 +376,16 @@ If the model being hosted will be available on multiple servers/instances add th
  }
  ...
  ]
-
  ```
 
  ## Deploying to a HF Space
 
  Create a `DOTENV_LOCAL` secret to your HF space with the content of your .env.local, and they will be picked up automatically when you run.
 
  You can change things like the parameters, or customize the preprompt to better suit your needs. You can also add more models by adding more objects to the array, with different preprompts for example.
 
+ #### chatPromptTemplate
+
+ When querying the model for a chat response, the `chatPromptTemplate` template is used. `messages` is an array of chat messages; it has the format `[{ content: string }, ...]`. To identify whether a message is a user message or an assistant message, the `ifUser` and `ifAssistant` block helpers can be used.
+
+ The following is the default `chatPromptTemplate`, although newlines and indentation have been added for readability. You can find the prompts used in production for HuggingChat [here](https://github.com/huggingface/chat-ui/blob/main/PROMPTS.md).
+
+ ```prompt
+ {{preprompt}}
+ {{#each messages}}
+ {{#ifUser}}{{@root.userMessageToken}}{{content}}{{@root.userMessageEndToken}}{{/ifUser}}
+ {{#ifAssistant}}{{@root.assistantMessageToken}}{{content}}{{@root.assistantMessageEndToken}}{{/ifAssistant}}
+ {{/each}}
+ {{assistantMessageToken}}
+ ```
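As a side note (illustrative, not part of this diff): the `ifUser`/`ifAssistant` helpers conceptually expand the `messages` array into a flat string. A minimal TypeScript sketch, with placeholder token values rather than chat-ui's real defaults:

```typescript
type ChatMessage = { from: "user" | "assistant"; content: string };

// Minimal sketch of what the default chatPromptTemplate expands to.
// The token strings here are assumptions for illustration only.
function renderChatPrompt(
	messages: ChatMessage[],
	preprompt = "",
	tokens = {
		userMessageToken: "<|user|>",
		userMessageEndToken: "</s>",
		assistantMessageToken: "<|assistant|>",
		assistantMessageEndToken: "</s>",
	}
): string {
	const body = messages
		.map((m) =>
			m.from === "user"
				? `${tokens.userMessageToken}${m.content}${tokens.userMessageEndToken}`
				: `${tokens.assistantMessageToken}${m.content}${tokens.assistantMessageEndToken}`
		)
		.join("");
	// The template ends with an open assistant token so the model continues from there.
	return `${preprompt}${body}${tokens.assistantMessageToken}`;
}
```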
+
+ #### Multimodal models
+
+ We currently only support IDEFICS as a multimodal model, hosted on TGI. You can enable it by using the following config (if you have a PRO HF API token):
+
+ ```env
+ {
+ "name": "HuggingFaceM4/idefics-80b-instruct",
+ "multimodal" : true,
+ "description": "IDEFICS is the new multimodal model by Hugging Face.",
+ "preprompt": "",
+ "chatPromptTemplate" : "{{#each messages}}{{#ifUser}}User: {{content}}{{/ifUser}}<end_of_utterance>\nAssistant: {{#ifAssistant}}{{content}}\n{{/ifAssistant}}{{/each}}",
+ "parameters": {
+ "temperature": 0.1,
+ "top_p": 0.95,
+ "repetition_penalty": 1.2,
+ "top_k": 12,
+ "truncate": 1000,
+ "max_new_tokens": 1024,
+ "stop": ["<end_of_utterance>", "User:", "\nUser:"]
+ }
+ }
+ ```
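For context (an illustrative sketch, not code from this diff): when a user uploads an image, chat-ui inlines it into the multimodal prompt as a base64 `data:` URI wrapped in markdown image syntax, roughly like this:

```typescript
import { Buffer } from "node:buffer";

// Sketch: how raw image bytes become an inline markdown image in the prompt.
// `mime` is the file's MIME type, e.g. "image/png".
function imageMarkdown(image: Buffer, mime: string): string {
	return `![](data:${mime};base64,${image.toString("base64")})`;
}
```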
+
+ #### Running your own models using a custom endpoint
+
+ If you want to, instead of hitting models on the Hugging Face Inference API, you can run your own models locally.
+
+ A good option is to hit a [text-generation-inference](https://github.com/huggingface/text-generation-inference) endpoint. This is what is done in the official [Chat UI Spaces Docker template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) for instance: both this app and a text-generation-inference server run inside the same container.
+
+ To do this, you can add your own endpoints to the `MODELS` variable in `.env.local`, by adding an `"endpoints"` key for each model in `MODELS`.
+
+ ```env
+ {
+ // rest of the model config here
+ "endpoints": [{
+ "type" : "tgi",
+ "url": "https://HOST:PORT",
+ }]
+ }
+ ```
+
+ If `endpoints` are left unspecified, ChatUI will look for the model on the hosted Hugging Face inference API using the model name.
+
+ ##### OpenAI API compatible models
 
  Chat UI can be used with any API server that supports OpenAI API compatibility, for example [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai), [LocalAI](https://github.com/go-skynet/LocalAI), [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), and [ialacol](https://github.com/chenhunghan/ialacol).
 
  }]`
  ```
 
+ ##### Llama.cpp API server
 
  chat-ui also supports the llama.cpp API server directly without the need for an adapter. You can do this using the `llamacpp` endpoint type.
 
 
  Start chat-ui with `npm run dev` and you should be able to chat with Zephyr locally.
 
+ #### Amazon
+
+ You can also specify your Amazon SageMaker instance as an endpoint for chat-ui. The config goes like this:
+
+ ```env
+ "endpoints": [
+ {
+ "type" : "aws",
+ "service" : "sagemaker",
+ "url": "",
+ "accessKey": "",
+ "secretKey" : "",
+ "sessionToken": "",
+ "region": "",
+ "weight": 1
+ }
+ ]
+ ```
+
+ You can also set `"service" : "lambda"` to use a lambda instance.
+
+ You can get the `accessKey` and `secretKey` from your AWS user, under programmatic access.
 
  ### Custom endpoint authorization
 
  ]
  ```
 
  #### Models hosted on multiple custom endpoints
 
  If the model being hosted will be available on multiple servers/instances add the `weight` parameter to your `.env.local`. The `weight` will be used to determine the probability of requesting a particular endpoint.
 
  }
  ...
  ]
  ```
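To make the `weight` semantics concrete, here is a hedged sketch (assumption: chat-ui picks an endpoint with probability proportional to its `weight`; this helper is illustrative, not the repo's actual code):

```typescript
type Endpoint = { url: string; weight: number };

// Pick an endpoint with probability proportional to its weight.
// `rand` is injectable for testing; defaults to Math.random().
function pickEndpoint(endpoints: Endpoint[], rand: number = Math.random()): Endpoint {
	const total = endpoints.reduce((sum, e) => sum + e.weight, 0);
	let threshold = rand * total;
	for (const e of endpoints) {
		threshold -= e.weight;
		if (threshold <= 0) return e;
	}
	// Guard against floating-point drift at the top of the range.
	return endpoints[endpoints.length - 1];
}
```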
 
+ #### Client Certificate Authentication (mTLS)
+
+ Custom endpoints may require client certificate authentication, depending on how you configure them. To enable mTLS between Chat UI and your custom endpoint, you will need to set `USE_CLIENT_CERTIFICATE` to `true`, and add the `CERT_PATH` and `KEY_PATH` parameters to your `.env.local`. These parameters should point to the location of the certificate and key files on your local machine. The certificate and key files should be in PEM format. The key file can be encrypted with a passphrase, in which case you will also need to add the `CLIENT_KEY_PASSWORD` parameter to your `.env.local`.
+
+ If you're using a certificate signed by a private CA, you will also need to add the `CA_PATH` parameter to your `.env.local`. This parameter should point to the location of the CA certificate file on your local machine.
+
+ If you're using a self-signed certificate, e.g. for testing or development purposes, you can set the `REJECT_UNAUTHORIZED` parameter to `false` in your `.env.local`. This will disable certificate validation, and allow Chat UI to connect to your custom endpoint.
+
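A sketch of how these variables could map onto Node's `https.Agent` options (hypothetical helper, not chat-ui's actual implementation; the env variable names come from the text above):

```typescript
import { readFileSync } from "node:fs";
import type { AgentOptions } from "node:https";

// Hypothetical helper: translate the mTLS env variables into https.Agent options.
function mtlsOptions(env: Record<string, string | undefined>): AgentOptions {
	if (env.USE_CLIENT_CERTIFICATE !== "true") return {};
	return {
		cert: readFileSync(env.CERT_PATH ?? "", "utf8"),
		key: readFileSync(env.KEY_PATH ?? "", "utf8"),
		passphrase: env.CLIENT_KEY_PASSWORD,
		ca: env.CA_PATH ? readFileSync(env.CA_PATH, "utf8") : undefined,
		// REJECT_UNAUTHORIZED=false disables validation for self-signed certs.
		rejectUnauthorized: env.REJECT_UNAUTHORIZED !== "false",
	};
}
```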
  ## Deploying to a HF Space
 
  Create a `DOTENV_LOCAL` secret to your HF space with the content of your .env.local, and they will be picked up automatically when you run.
package-lock.json CHANGED
@@ -12,10 +12,12 @@
  "@huggingface/inference": "^2.6.3",
  "@xenova/transformers": "^2.6.0",
  "autoprefixer": "^10.4.14",
+ "browser-image-resizer": "^2.4.1",
  "date-fns": "^2.29.3",
  "dotenv": "^16.0.3",
  "handlebars": "^4.7.8",
  "highlight.js": "^11.7.0",
+ "image-size": "^1.0.2",
  "jsdom": "^22.0.0",
  "marked": "^4.3.0",
  "mongodb": "^5.8.0",
@@ -1796,6 +1798,11 @@
  "base64-js": "^1.1.2"
  }
  },
+ "node_modules/browser-image-resizer": {
+ "version": "2.4.1",
+ "resolved": "https://registry.npmjs.org/browser-image-resizer/-/browser-image-resizer-2.4.1.tgz",
+ "integrity": "sha512-gqrmr7+NTI9FgZVVyw/GIqwJE3MhNWaBn1R5ptu75r+/M5ncyntSMQYuYhOPonm44qQNnkGN9cnghlpd9h1Hug=="
+ },
  "node_modules/browserslist": {
  "version": "4.21.5",
  "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.21.5.tgz",
@@ -3266,6 +3273,20 @@
  "node": ">= 4"
  }
  },
+ "node_modules/image-size": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/image-size/-/image-size-1.0.2.tgz",
+ "integrity": "sha512-xfOoWjceHntRb3qFCrh5ZFORYH8XCdYpASltMhZ/Q0KZiOwjdE/Yl2QCiWdwD+lygV5bMCvauzgu5PxBX/Yerg==",
+ "dependencies": {
+ "queue": "6.0.2"
+ },
+ "bin": {
+ "image-size": "bin/image-size.js"
+ },
+ "engines": {
+ "node": ">=14.0.0"
+ }
+ },
  "node_modules/import-fresh": {
  "version": "3.3.0",
  "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.0.tgz",
@@ -4986,6 +5007,14 @@
  "resolved": "https://registry.npmjs.org/querystringify/-/querystringify-2.2.0.tgz",
  "integrity": "sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ=="
  },
+ "node_modules/queue": {
+ "version": "6.0.2",
+ "resolved": "https://registry.npmjs.org/queue/-/queue-6.0.2.tgz",
+ "integrity": "sha512-iHZWu+q3IdFZFX36ro/lKBkSvfkztY5Y7HMiPlOUjhupPcG2JMfst2KKEpu5XndviX/3UhFbRngUPNKtgvtZiA==",
+ "dependencies": {
+ "inherits": "~2.0.3"
+ }
+ },
  "node_modules/queue-microtask": {
  "version": "1.2.3",
  "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
package.json CHANGED
@@ -48,10 +48,12 @@
  "@huggingface/inference": "^2.6.3",
  "@xenova/transformers": "^2.6.0",
  "autoprefixer": "^10.4.14",
+ "browser-image-resizer": "^2.4.1",
  "date-fns": "^2.29.3",
  "dotenv": "^16.0.3",
  "handlebars": "^4.7.8",
  "highlight.js": "^11.7.0",
+ "image-size": "^1.0.2",
  "jsdom": "^22.0.0",
  "marked": "^4.3.0",
  "mongodb": "^5.8.0",
src/lib/buildPrompt.ts CHANGED
@@ -2,18 +2,17 @@ import type { BackendModel } from "./server/models";
  import type { Message } from "./types/Message";
  import { format } from "date-fns";
  import type { WebSearch } from "./types/WebSearch";
- /**
-  * Convert [{user: "assistant", content: "hi"}, {user: "user", content: "hello"}] to:
-  *
-  * <|assistant|>hi<|endoftext|><|prompter|>hello<|endoftext|><|assistant|>
-  */
+ import { downloadFile } from "./server/files/downloadFile";
+ import type { Conversation } from "./types/Conversation";
 
  interface buildPromptOptions {
- 	messages: Pick<Message, "from" | "content">[];
+ 	messages: Pick<Message, "from" | "content" | "files">[];
+ 	id?: Conversation["_id"];
  	model: BackendModel;
  	locals?: App.Locals;
  	webSearch?: WebSearch;
  	preprompt?: string;
+ 	files?: File[];
  }
 
  export async function buildPrompt({
@@ -21,6 +20,7 @@ export async function buildPrompt({
  	model,
  	webSearch,
  	preprompt,
+ 	id,
  }: buildPromptOptions): Promise<string> {
  	if (webSearch && webSearch.context) {
  		const lastMsg = messages.slice(-1)[0];
@@ -49,6 +49,38 @@ export async function buildPrompt({
  	];
  	}
 
+ 	// section to handle potential files input
+ 	if (model.multimodal) {
+ 		messages = await Promise.all(
+ 			messages.map(async (el) => {
+ 				let content = el.content;
+
+ 				if (el.from === "user") {
+ 					if (el?.files && el.files.length > 0 && id) {
+ 						const markdowns = await Promise.all(
+ 							el.files.map(async (hash) => {
+ 								try {
+ 									const { content: image, mime } = await downloadFile(hash, id);
+ 									const b64 = image.toString("base64");
+ 									return `![](data:${mime};base64,${b64})`;
+ 								} catch (e) {
+ 									console.error(e);
+ 								}
+ 							})
+ 						);
+ 						content += markdowns.join("\n ");
+ 					} else {
+ 						// if no image, append an empty white image
+ 						content +=
+ 							"\n![](data:image/png;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAQABADAREAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD+/igAoAKACgD/2Q==)";
+ 					}
+ 				}
+
+ 				return { ...el, content };
+ 			})
+ 		);
+ 	}
+
  	return (
  		model
  			.chatPromptRender({ messages, preprompt })
src/lib/components/UploadBtn.svelte ADDED
@@ -0,0 +1,23 @@
+ <script lang="ts">
+ 	import CarbonUpload from "~icons/carbon/upload";
+
+ 	export let classNames = "";
+ 	export let files: File[];
+ 	let filelist: FileList;
+
+ 	$: if (filelist) {
+ 		files = Array.from(filelist);
+ 	}
+ </script>
+
+ <button
+ 	class="btn relative h-8 rounded-lg border bg-white px-3 py-1 text-sm text-gray-500 shadow-sm transition-all hover:bg-gray-100 dark:border-gray-600 dark:bg-gray-700 dark:text-gray-300 dark:hover:bg-gray-600 {classNames}"
+ >
+ 	<input
+ 		bind:files={filelist}
+ 		class="absolute w-full cursor-pointer opacity-0"
+ 		type="file"
+ 		accept="image/*"
+ 	/>
+ 	<CarbonUpload class="mr-2 text-xs " /> Upload image
+ </button>
src/lib/components/chat/ChatInput.svelte CHANGED
@@ -6,7 +6,6 @@
  export let maxRows: null | number = null;
  export let placeholder = "";
  export let disabled = false;
-
  // Approximate width from which we disable autofocus
  const TABLET_VIEWPORT_WIDTH = 768;
 
src/lib/components/chat/ChatMessage.svelte CHANGED
@@ -234,36 +234,59 @@
  {/if}
  {#if message.from === "user"}
  <div class="group relative flex items-start justify-start gap-4 max-sm:text-sm">
- <div class="mt-5 h-3 w-3 flex-none rounded-full" />
- <div
- class="max-w-full whitespace-break-spaces break-words rounded-2xl px-5 py-3.5 text-gray-500 dark:text-gray-400"
- >
- {message.content.trim()}
- </div>
- {#if !loading}
- <div class="absolute right-0 top-3.5 flex gap-2 lg:-right-2">
- {#if downloadLink}
- <a
- class="rounded-lg border border-gray-100 p-1 text-xs text-gray-400 group-hover:block hover:text-gray-500 dark:border-gray-800 dark:text-gray-400 dark:hover:text-gray-300 md:hidden"
- title="Download prompt and parameters"
- type="button"
- target="_blank"
- href={downloadLink}
- >
- <CarbonDownload />
- </a>
- {/if}
- {#if !readOnly}
- <button
- class="cursor-pointer rounded-lg border border-gray-100 p-1 text-xs text-gray-400 group-hover:block hover:text-gray-500 dark:border-gray-800 dark:text-gray-400 dark:hover:text-gray-300 md:hidden lg:-right-2"
- title="Retry"
- type="button"
- on:click={() => dispatch("retry", { content: message.content, id: message.id })}
- >
- <CarbonRotate360 />
- </button>
- {/if}
- </div>
- {/if}
+ <div class="flex flex-col">
+ {#if message.files && message.files.length > 0}
+ <div class="mx-auto grid w-fit grid-cols-2 gap-5 px-5">
+ {#each message.files as file}
+ <!-- handle the case where this is a hash that points to an image in the db, hash is always 64 char long -->
+ {#if file.length === 64}
+ <img
+ src={$page.url.pathname + "/output/" + file}
+ alt="input from user"
+ class="my-2 aspect-auto max-h-48 rounded-lg shadow-lg"
+ />
+ {:else}
+ <!-- handle the case where this is a base64 encoded image -->
+ <img
+ src={"data:image/*;base64," + file}
+ alt="input from user"
+ class="my-2 aspect-auto max-h-48 rounded-lg shadow-lg"
+ />
+ {/if}
+ {/each}
+ </div>
+ {/if}
+
+ <div
+ class="max-w-full whitespace-break-spaces break-words rounded-2xl px-5 py-3.5 text-gray-500 dark:text-gray-400"
+ >
+ {message.content.trim()}
+ </div>
+ {#if !loading}
+ <div class="absolute right-0 top-3.5 flex gap-2 lg:-right-2">
+ {#if downloadLink}
+ <a
+ class="rounded-lg border border-gray-100 p-1 text-xs text-gray-400 group-hover:block hover:text-gray-500 dark:border-gray-800 dark:text-gray-400 dark:hover:text-gray-300 md:hidden"
+ title="Download prompt and parameters"
+ type="button"
+ target="_blank"
+ href={downloadLink}
+ >
+ <CarbonDownload />
+ </a>
+ {/if}
+ {#if !readOnly}
+ <button
+ class="cursor-pointer rounded-lg border border-gray-100 p-1 text-xs text-gray-400 group-hover:block hover:text-gray-500 dark:border-gray-800 dark:text-gray-400 dark:hover:text-gray-300 md:hidden lg:-right-2"
+ title="Retry"
+ type="button"
+ on:click={() => dispatch("retry", { content: message.content, id: message.id })}
+ >
+ <CarbonRotate360 />
+ </button>
+ {/if}
+ </div>
+ {/if}
+ </div>
  </div>
  {/if}
src/lib/components/chat/ChatWindow.svelte CHANGED
@@ -5,6 +5,8 @@
  import CarbonSendAltFilled from "~icons/carbon/send-alt-filled";
  import CarbonExport from "~icons/carbon/export";
  import CarbonStopFilledAlt from "~icons/carbon/stop-filled-alt";
  import EosIconsLoading from "~icons/eos-icons/loading";
 
  import ChatMessages from "./ChatMessages.svelte";
@@ -17,7 +19,10 @@
  import type { WebSearchUpdate } from "$lib/types/MessageUpdate";
  import { page } from "$app/stores";
  import DisclaimerModal from "../DisclaimerModal.svelte";
  import RetryBtn from "../RetryBtn.svelte";
 
  export let messages: Message[] = [];
  export let loading = false;
@@ -28,6 +33,7 @@
  export let settings: LayoutData["settings"];
  export let webSearchMessages: WebSearchUpdate[] = [];
  export let preprompt: string | undefined = undefined;
 
  $: isReadOnly = !models.some((model) => model.id === currentModel.id);
 
@@ -47,7 +53,25 @@
  message = "";
  };
 
  $: lastIsError = messages[messages.length - 1]?.from === "user" && !loading;
  </script>
 
  <div class="relative min-h-0 min-w-0">
@@ -84,94 +108,134 @@
  if (!loading) dispatch("retry", ev.detail);
  }}
  />
  <div
- class="dark:via-gray-80 pointer-events-none absolute inset-x-0 bottom-0 z-0 mx-auto flex w-full max-w-3xl flex-col items-center justify-center bg-gradient-to-t from-white via-white/80 to-white/0 px-3.5 py-4 dark:border-gray-800 dark:from-gray-900 dark:to-gray-900/0 max-md:border-t max-md:bg-white max-md:dark:bg-gray-900 sm:px-5 md:py-8 xl:max-w-4xl [&>*]:pointer-events-auto"
  >
- <div class="flex w-full pb-3">
- {#if settings?.searchEnabled}
- <WebSearchToggle />
- {/if}
- {#if loading}
- <StopGeneratingBtn classNames="ml-auto" on:click={() => dispatch("stop")} />
- {/if}
- {#if lastIsError}
- <RetryBtn
- classNames="ml-auto"
- on:click={() =>
- dispatch("retry", {
- id: messages[messages.length - 1].id,
- content: messages[messages.length - 1].content,
- })}
- />
- {/if}
  </div>
- <form
- on:submit|preventDefault={handleSubmit}
- class="relative flex w-full max-w-4xl flex-1 items-center rounded-xl border bg-gray-100 focus-within:border-gray-300 dark:border-gray-600 dark:bg-gray-700 dark:focus-within:border-gray-500
- {isReadOnly ? 'opacity-30' : ''}"
  >
- <div class="flex w-full flex-1 border-none bg-transparent">
- {#if lastIsError}
- <ChatInput value="Sorry, something went wrong. Please try again." disabled={true} />
- {:else}
- <ChatInput
- placeholder="Ask anything"
- bind:value={message}
- on:submit={handleSubmit}
- on:keypress={(ev) => {
- if ($page.data.loginRequired) {
- ev.preventDefault();
- loginModalOpen = true;
- }
- }}
- maxRows={4}
- disabled={isReadOnly || lastIsError}
  />
  {/if}
 
- {#if loading}
- <button
- class="btn mx-1 my-1 inline-block h-[2.4rem] self-end rounded-lg bg-transparent p-1 px-[0.7rem] text-gray-400 disabled:opacity-60 enabled:hover:text-gray-700 dark:disabled:opacity-40 enabled:dark:hover:text-gray-100 md:hidden"
- on:click={() => dispatch("stop")}
- >
- <CarbonStopFilledAlt />
- </button>
- <div
- class="mx-1 my-1 hidden h-[2.4rem] items-center p-1 px-[0.7rem] text-gray-400 disabled:opacity-60 enabled:hover:text-gray-700 dark:disabled:opacity-40 enabled:dark:hover:text-gray-100 md:flex"
- >
- <EosIconsLoading />
  </div>
- {:else}
  <button
- class="btn mx-1 my-1 h-[2.4rem] self-end rounded-lg bg-transparent p-1 px-[0.7rem] text-gray-400 disabled:opacity-60 enabled:hover:text-gray-700 dark:disabled:opacity-40 enabled:dark:hover:text-gray-100"
- disabled={!message || isReadOnly}
- type="submit"
  >
- <CarbonSendAltFilled />
  </button>
  {/if}
  </div>
- </form>
- <div class="mt-2 flex justify-between self-stretch px-1 text-xs text-gray-400/90 max-sm:gap-2">
- <p>
- Model: <a
- href={currentModel.modelUrl || "https://huggingface.co/" + currentModel.name}
- target="_blank"
- rel="noreferrer"
- class="hover:underline">{currentModel.displayName}</a
- > <span class="max-sm:hidden">·</span><br class="sm:hidden" /> Generated content may be inaccurate
163
- or false.
164
- </p>
165
- {#if messages.length}
166
- <button
167
- class="flex flex-none items-center hover:text-gray-400 hover:underline max-sm:rounded-lg max-sm:bg-gray-50 max-sm:px-2.5 dark:max-sm:bg-gray-800"
168
- type="button"
169
- on:click={() => dispatch("share")}
170
- >
171
- <CarbonExport class="text-[.6rem] sm:mr-1.5 sm:text-primary-500" />
172
- <div class="max-sm:hidden">Share this conversation</div>
173
- </button>
174
- {/if}
175
  </div>
176
  </div>
177
  </div>
 
5
  import CarbonSendAltFilled from "~icons/carbon/send-alt-filled";
6
  import CarbonExport from "~icons/carbon/export";
7
  import CarbonStopFilledAlt from "~icons/carbon/stop-filled-alt";
8
+ import CarbonClose from "~icons/carbon/close";
9
+
10
  import EosIconsLoading from "~icons/eos-icons/loading";
11
 
12
  import ChatMessages from "./ChatMessages.svelte";
 
19
  import type { WebSearchUpdate } from "$lib/types/MessageUpdate";
20
  import { page } from "$app/stores";
21
  import DisclaimerModal from "../DisclaimerModal.svelte";
22
+ import FileDropzone from "./FileDropzone.svelte";
23
  import RetryBtn from "../RetryBtn.svelte";
24
+ import UploadBtn from "../UploadBtn.svelte";
25
+ import file2base64 from "$lib/utils/file2base64";
26
 
27
  export let messages: Message[] = [];
28
  export let loading = false;
 
33
  export let settings: LayoutData["settings"];
34
  export let webSearchMessages: WebSearchUpdate[] = [];
35
  export let preprompt: string | undefined = undefined;
36
+ export let files: File[] = [];
37
 
38
  $: isReadOnly = !models.some((model) => model.id === currentModel.id);
39
 
 
53
  message = "";
54
  };
55
 
56
+ let lastTarget: EventTarget | null = null;
57
+
58
+ let onDrag = false;
59
+
60
+ const onDragEnter = (e: DragEvent) => {
61
+ lastTarget = e.target;
62
+ onDrag = true;
63
+ };
64
+ const onDragLeave = (e: DragEvent) => {
65
+ if (e.target === lastTarget) {
66
+ onDrag = false;
67
+ }
68
+ };
69
+ const onDragOver = (e: DragEvent) => {
70
+ e.preventDefault();
71
+ };
72
  $: lastIsError = messages[messages.length - 1]?.from === "user" && !loading;
73
+
74
+ $: sources = files.map((file) => file2base64(file));
75
  </script>
76
 
77
  <div class="relative min-h-0 min-w-0">
 
108
  if (!loading) dispatch("retry", ev.detail);
109
  }}
110
  />
111
+
112
  <div
113
+ class="pointer-events-none absolute inset-x-0 bottom-0 z-0 mx-auto flex w-full max-w-3xl flex-col items-center justify-center md:px-5 md:py-8 xl:max-w-4xl [&>*]:pointer-events-auto"
114
  >
115
+ <div class="flex flex-row flex-wrap justify-center gap-2.5 max-md:pb-3">
116
+ {#each sources as source, index}
117
+ {#await source then src}
118
+ <div class="relative h-24 w-24 overflow-hidden rounded-lg shadow-lg">
119
+ <img
120
+ src={`data:image/*;base64,${src}`}
121
+ alt="input content"
122
+ class="h-full w-full rounded-lg bg-gray-400 object-cover dark:bg-gray-900"
123
+ />
124
+ <!-- add a button on top that deletes this image from sources -->
125
+ <button
126
+ class="absolute left-1 top-1"
127
+ on:click={() => {
128
+ files = files.filter((_, i) => i !== index);
129
+ }}
130
+ >
131
+ <CarbonClose class="text-md font-black text-gray-300 hover:text-gray-100" />
132
+ </button>
133
+ </div>
134
+ {/await}
135
+ {/each}
136
  </div>
137
+
138
+ <div
139
+ class="dark:via-gray-80 w-full bg-gradient-to-t from-white via-white/80 to-white/0 dark:border-gray-800 dark:from-gray-900 dark:to-gray-900/0 max-md:border-t max-md:bg-white max-md:px-4 max-md:dark:bg-gray-900"
 
140
  >
141
+ <div class="flex w-full pb-3 max-md:pt-3">
142
+ {#if settings?.searchEnabled}
143
+ <WebSearchToggle />
144
+ {/if}
145
+ {#if loading}
146
+ <StopGeneratingBtn classNames="ml-auto" on:click={() => dispatch("stop")} />
147
+ {:else if lastIsError}
148
+ <RetryBtn
149
+ classNames="ml-auto"
150
+ on:click={() =>
151
+ dispatch("retry", {
152
+ id: messages[messages.length - 1].id,
153
+ content: messages[messages.length - 1].content,
154
+ })}
 
 
155
  />
156
+ {:else if currentModel.multimodal}
157
+ <UploadBtn bind:files classNames="ml-auto" />
158
  {/if}
159
+ </div>
160
+ <form
161
+ on:dragover={onDragOver}
162
+ on:dragenter={onDragEnter}
163
+ on:dragleave={onDragLeave}
164
+ tabindex="-1"
165
+ aria-label="file dropzone"
166
+ on:submit|preventDefault={handleSubmit}
167
+ class="relative flex w-full max-w-4xl flex-1 items-center rounded-xl border bg-gray-100 focus-within:border-gray-300 dark:border-gray-600 dark:bg-gray-700 dark:focus-within:border-gray-500
168
+ {isReadOnly ? 'opacity-30' : ''}"
169
+ >
170
+ {#if onDrag && currentModel.multimodal}
171
+ <FileDropzone bind:files bind:onDrag />
172
+ {:else}
173
+ <div class="flex w-full flex-1 border-none bg-transparent">
174
+ {#if lastIsError}
175
+ <ChatInput value="Sorry, something went wrong. Please try again." disabled={true} />
176
+ {:else}
177
+ <ChatInput
178
+ placeholder="Ask anything"
179
+ bind:value={message}
180
+ on:submit={handleSubmit}
181
+ on:keypress={(ev) => {
182
+ if ($page.data.loginRequired) {
183
+ ev.preventDefault();
184
+ loginModalOpen = true;
185
+ }
186
+ }}
187
+ maxRows={4}
188
+ disabled={isReadOnly || lastIsError}
189
+ />
190
+ {/if}
191
 
192
+ {#if loading}
193
+ <button
194
+ class="btn mx-1 my-1 inline-block h-[2.4rem] self-end rounded-lg bg-transparent p-1 px-[0.7rem] text-gray-400 disabled:opacity-60 enabled:hover:text-gray-700 dark:disabled:opacity-40 enabled:dark:hover:text-gray-100 md:hidden"
195
+ on:click={() => dispatch("stop")}
196
+ >
197
+ <CarbonStopFilledAlt />
198
+ </button>
199
+ <div
200
+ class="mx-1 my-1 hidden h-[2.4rem] items-center p-1 px-[0.7rem] text-gray-400 disabled:opacity-60 enabled:hover:text-gray-700 dark:disabled:opacity-40 enabled:dark:hover:text-gray-100 md:flex"
201
+ >
202
+ <EosIconsLoading />
203
+ </div>
204
+ {:else}
205
+ <button
206
+ class="btn mx-1 my-1 h-[2.4rem] self-end rounded-lg bg-transparent p-1 px-[0.7rem] text-gray-400 disabled:opacity-60 enabled:hover:text-gray-700 dark:disabled:opacity-40 enabled:dark:hover:text-gray-100"
207
+ disabled={!message || isReadOnly}
208
+ type="submit"
209
+ >
210
+ <CarbonSendAltFilled />
211
+ </button>
212
+ {/if}
213
  </div>
214
+ {/if}
215
+ </form>
216
+ <div
217
+ class="mt-2 flex justify-between self-stretch px-1 text-xs text-gray-400/90 max-md:mb-2 max-sm:gap-2"
218
+ >
219
+ <p>
220
+ Model: <a
221
+ href={currentModel.modelUrl || "https://huggingface.co/" + currentModel.name}
222
+ target="_blank"
223
+ rel="noreferrer"
224
+ class="hover:underline">{currentModel.displayName}</a
225
+ > <span class="max-sm:hidden">·</span><br class="sm:hidden" /> Generated content may be inaccurate
226
+ or false.
227
+ </p>
228
+ {#if messages.length}
229
  <button
230
+ class="flex flex-none items-center hover:text-gray-400 hover:underline max-sm:rounded-lg max-sm:bg-gray-50 max-sm:px-2.5 dark:max-sm:bg-gray-800"
231
+ type="button"
232
+ on:click={() => dispatch("share")}
233
  >
234
+ <CarbonExport class="text-[.6rem] sm:mr-1.5 sm:text-primary-500" />
235
+ <div class="max-sm:hidden">Share this conversation</div>
236
  </button>
237
  {/if}
238
  </div>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
239
  </div>
240
  </div>
241
  </div>
src/lib/components/chat/FileDropzone.svelte ADDED
@@ -0,0 +1,110 @@
+ <script lang="ts">
+ import { onDestroy } from "svelte";
+ import CarbonImage from "~icons/carbon/image";
+ // import EosIconsLoading from "~icons/eos-icons/loading";
+
+ export let files: File[];
+
+ let file_error_message = "";
+ let errorTimeout: ReturnType<typeof setTimeout>;
+
+ export let onDrag = false;
+
+ async function dropHandle(event: DragEvent) {
+ event.preventDefault();
+ if (event.dataTransfer && event.dataTransfer.items) {
+ // Use DataTransferItemList interface to access the file(s)
+ if (files.length > 0) {
+ files = [];
+ }
+ // get only the first file
+ // optionally: we need to handle multiple files, if we want to support document upload for example
+ // for multimodal we only support one image at a time but we could support multiple PDFs
+ if (event.dataTransfer.items[0].kind === "file") {
+ const file = event.dataTransfer.items[0].getAsFile();
+ if (file) {
+ if (!event.dataTransfer.items[0].type.startsWith("image")) {
+ setErrorMsg("Only images are supported");
+ files = [];
+ return;
+ }
+ // if image is bigger than 2MB abort
+ if (file.size > 2 * 1024 * 1024) {
+ setErrorMsg("Image is too big. (2MB max)");
+ files = [];
+ return;
+ }
+ files = [file];
+ onDrag = false;
+ }
+ }
+ }
+ }
+
+ function setErrorMsg(errorMsg: string) {
+ if (errorTimeout) {
+ clearTimeout(errorTimeout);
+ }
+ file_error_message = errorMsg;
+ errorTimeout = setTimeout(() => {
+ file_error_message = "";
+ onDrag = false;
+ }, 2000);
+ }
+
+ onDestroy(() => {
+ if (errorTimeout) {
+ clearTimeout(errorTimeout);
+ }
+ });
+ </script>
+
+ <div
+ id="dropzone"
+ role="form"
+ on:drop={dropHandle}
+ class="relative flex w-full max-w-4xl flex-col items-center rounded-xl border bg-gray-100 focus-within:border-gray-300 dark:border-gray-600 dark:bg-gray-700 dark:focus-within:border-gray-500"
+ >
+ <div class="object-center">
+ {#if file_error_message}
+ <div
+ class="absolute bottom-0 left-0 right-0 top-0 flex flex-col items-center justify-center gap-2 rounded-xl bg-gray-100 bg-opacity-50 dark:bg-gray-700 dark:bg-opacity-50"
+ >
+ <p class="text-red-500 dark:text-red-400">{file_error_message}</p>
+ <div class="h-2.5 w-1/2 rounded-full bg-gray-200 dark:bg-gray-700">
+ <div
+ class="animate-progress-bar h-2.5
+ rounded-full bg-red-500
+ dark:text-red-400
+ "
+ />
+ </div>
+ </div>
+ {/if}
+ <div class="mt-3 flex justify-center" class:opacity-0={file_error_message}>
+ <CarbonImage class="text-5xl text-gray-500 dark:text-gray-400" />
+ </div>
+ <p
+ class="mb-3 mt-3 text-sm text-gray-500 dark:text-gray-400"
+ class:opacity-0={file_error_message}
+ >
+ Drag and drop <span class="font-semibold">one image</span> here
+ </p>
+ </div>
+ </div>
+
+ <style>
+ @keyframes slideInFromLeft {
+ 0% {
+ width: 0;
+ }
+ 100% {
+ width: 100%;
+ }
+ }
+
+ .animate-progress-bar {
+ /* This section calls the slideInFromLeft animation we defined above */
+ animation: 2s linear 0s 1 slideInFromLeft;
+ }
+ </style>
src/lib/server/database.ts CHANGED
@@ -1,5 +1,5 @@
  import { MONGODB_URL, MONGODB_DB_NAME, MONGODB_DIRECT_CONNECTION } from "$env/static/private";
- import { MongoClient } from "mongodb";
+ import { GridFSBucket, MongoClient } from "mongodb";
  import type { Conversation } from "$lib/types/Conversation";
  import type { SharedConversation } from "$lib/types/SharedConversation";
  import type { WebSearch } from "$lib/types/WebSearch";
@@ -29,6 +29,7 @@ const settings = db.collection<Settings>("settings");
  const users = db.collection<User>("users");
  const webSearches = db.collection<WebSearch>("webSearches");
  const messageEvents = db.collection<MessageEvent>("messageEvents");
+ const bucket = new GridFSBucket(db, { bucketName: "files" });

  export { client, db };
  export const collections = {
@@ -39,6 +40,7 @@ export const collections = {
  users,
  webSearches,
  messageEvents,
+ bucket,
  };

  client.on("open", () => {
src/lib/server/endpoints/endpoints.ts CHANGED
@@ -11,6 +11,7 @@ interface EndpointParameters {
  conversation: {
  messages: Omit<Conversation["messages"][0], "id">[];
  preprompt?: Conversation["preprompt"];
+ _id?: Conversation["_id"];
  };
  }
src/lib/server/endpoints/tgi/endpointTgi.ts CHANGED
@@ -23,6 +23,7 @@ export function endpointTgi({
  webSearch: conversation.messages[conversation.messages.length - 1].webSearch,
  preprompt: conversation.preprompt,
  model,
+ id: conversation._id,
  });

  return textGenerationStream({
src/lib/server/files/downloadFile.ts ADDED
@@ -0,0 +1,36 @@
+ import { error } from "@sveltejs/kit";
+ import { collections } from "../database";
+ import type { Conversation } from "$lib/types/Conversation";
+ import type { SharedConversation } from "$lib/types/SharedConversation";
+
+ export async function downloadFile(
+ sha256: string,
+ convId: Conversation["_id"] | SharedConversation["_id"]
+ ) {
+ const fileId = collections.bucket.find({ filename: `${convId.toString()}-${sha256}` });
+ let mime = "";
+
+ const content = await fileId.next().then(async (file) => {
+ if (!file) {
+ throw error(404, "File not found");
+ }
+ if (file.metadata?.conversation !== convId.toString()) {
+ throw error(403, "You don't have access to this file.");
+ }
+
+ mime = file.metadata?.mime;
+
+ const fileStream = collections.bucket.openDownloadStream(file._id);
+
+ const fileBuffer = await new Promise<Buffer>((resolve, reject) => {
+ const chunks: Uint8Array[] = [];
+ fileStream.on("data", (chunk) => chunks.push(chunk));
+ fileStream.on("error", reject);
+ fileStream.on("end", () => resolve(Buffer.concat(chunks)));
+ });
+
+ return fileBuffer;
+ });
+
+ return { content, mime };
+ }
src/lib/server/files/uploadFile.ts ADDED
@@ -0,0 +1,21 @@
+ import type { Conversation } from "$lib/types/Conversation";
+ import { sha256 } from "$lib/utils/sha256";
+ import { collections } from "../database";
+
+ export async function uploadFile(file: Blob, conv: Conversation): Promise<string> {
+ const sha = await sha256(await file.text());
+
+ const upload = collections.bucket.openUploadStream(`${conv._id}-${sha}`, {
+ metadata: { conversation: conv._id.toString(), mime: "image/jpeg" },
+ });
+
+ upload.write((await file.arrayBuffer()) as unknown as Buffer);
+ upload.end();
+
+ // only return the filename when upload throws a finish event or a 10s time out occurs
+ return new Promise((resolve, reject) => {
+ upload.once("finish", () => resolve(sha));
+ upload.once("error", reject);
+ setTimeout(() => reject(new Error("Upload timed out")), 10000);
+ });
+ }
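The upload above stores each file in GridFS under a content-addressed name, `${conv._id}-${sha}`, which `downloadFile` later reconstructs from the conversation id and the hash. A minimal sketch of that naming convention, using Node's built-in `node:crypto` rather than the app's `$lib/utils/sha256` helper (the `gridFsFilename` function and the example id are illustrative, not part of the PR):

```typescript
import { createHash } from "node:crypto";

// Rebuild the GridFS filename the same way uploadFile does:
// `${conversationId}-${sha256(fileContent)}` in hex.
function gridFsFilename(conversationId: string, content: string): string {
	const sha = createHash("sha256").update(content).digest("hex");
	return `${conversationId}-${sha}`;
}

// Example: a stand-in conversation id and a small payload.
const name = gridFsFilename("652f1c0deadbeefdeadbeef0", "hello");
```

Because the hash is derived from the content, re-uploading the same file in the same conversation maps to the same GridFS filename.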
src/lib/server/models.ts CHANGED
@@ -57,6 +57,7 @@ const modelConfig = z.object({
  })
  .passthrough()
  .optional(),
+ multimodal: z.boolean().default(false),
  });

  const modelsRaw = z.array(modelConfig).parse(JSON.parse(MODELS));
@@ -144,4 +145,4 @@ export const smallModel = TASK_MODEL
  defaultModel
  : defaultModel;

- export type BackendModel = Optional<typeof defaultModel, "preprompt" | "parameters">;
+ export type BackendModel = Optional<typeof defaultModel, "preprompt" | "parameters" | "multimodal">;
src/lib/stores/pendingMessage.ts CHANGED
@@ -1,3 +1,9 @@
  import { writable } from "svelte/store";

- export const pendingMessage = writable<string>("");
+ export const pendingMessage = writable<
+ | {
+ content: string;
+ files: File[];
+ }
+ | undefined
+ >();
src/lib/types/Message.ts CHANGED
@@ -10,4 +10,5 @@ export type Message = Partial<Timestamps> & {
  webSearchId?: WebSearch["_id"]; // legacy version
  webSearch?: WebSearch;
  score?: -1 | 0 | 1;
+ files?: string[]; // can contain either the hash of the file or the b64 encoded image data on the client side when uploading
  };
src/lib/types/MessageUpdate.ts CHANGED
@@ -31,9 +31,16 @@ export type StatusUpdate = {
  message?: string;
  };

+ export type ErrorUpdate = {
+ type: "error";
+ message: string;
+ name: string;
+ };
+
  export type MessageUpdate =
  | FinalAnswer
  | TextStreamUpdate
  | AgentUpdate
  | WebSearchUpdate
- | StatusUpdate;
+ | StatusUpdate
+ | ErrorUpdate;
src/lib/types/Model.ts CHANGED
@@ -13,4 +13,5 @@ export type Model = Pick<
  | "modelUrl"
  | "datasetUrl"
  | "preprompt"
+ | "multimodal"
  >;
src/lib/utils/file2base64.ts ADDED
@@ -0,0 +1,14 @@
+ const file2base64 = (file: File): Promise<string> => {
+ return new Promise<string>((resolve, reject) => {
+ const reader = new FileReader();
+ reader.readAsDataURL(file);
+ reader.onload = () => {
+ const dataUrl = reader.result as string;
+ const base64 = dataUrl.split(",")[1];
+ resolve(base64);
+ };
+ reader.onerror = (error) => reject(error);
+ });
+ };
+
+ export default file2base64;
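The helper above is browser-only (it relies on `FileReader` and data URLs). For illustration, the same base64 step can be round-tripped server-side with Node's `Buffer`, which is how the POST endpoint in this PR decodes the payloads it receives; `toBase64`/`fromBase64` are hypothetical names, not part of the PR:

```typescript
// Node-side mirror of the client's file2base64 step, for illustration only.
function toBase64(bytes: Uint8Array): string {
	return Buffer.from(bytes).toString("base64");
}

function fromBase64(b64: string): Uint8Array {
	return new Uint8Array(Buffer.from(b64, "base64"));
}

// PNG magic bytes as a tiny example payload.
const original = new Uint8Array([137, 80, 78, 71]);
const encoded = toBase64(original);
const decoded = fromBase64(encoded);
```

Note that `file2base64` strips the `data:<mime>;base64,` prefix before resolving, so what travels to the server is the bare base64 string shown here.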
src/lib/utils/models.ts CHANGED
@@ -1,4 +1,4 @@
  import type { Model } from "$lib/types/Model";

- export const findCurrentModel = (models: Model[], id?: string) =>
+ export const findCurrentModel = (models: Model[], id?: string): Model =>
  models.find((m) => m.id === id) ?? models[0];
src/routes/+layout.server.ts CHANGED
@@ -102,6 +102,7 @@ export const load: LayoutServerLoad = async ({ locals, depends, url }) => {
  promptExamples: model.promptExamples,
  parameters: model.parameters,
  preprompt: model.preprompt,
+ multimodal: model.multimodal,
  })),
  oldModels,
  user: locals.user && {
src/routes/+page.svelte CHANGED
@@ -9,6 +9,7 @@

  export let data;
  let loading = false;
+ let files: File[] = [];

  async function createConversation(message: string) {
  try {
@@ -33,7 +34,10 @@
  const { conversationId } = await res.json();

  // Ugly hack to use a store as temp storage, feel free to improve ^^
- pendingMessage.set(message);
+ pendingMessage.set({
+ content: message,
+ files,
+ });

  // invalidateAll to update list of conversations
  await goto(`${base}/conversation/${conversationId}`, { invalidateAll: true });
@@ -56,4 +60,5 @@
  currentModel={findCurrentModel([...data.models, ...data.oldModels], data.settings.activeModel)}
  models={data.models}
  settings={data.settings}
+ bind:files
  />
src/routes/conversation/[id]/+page.svelte CHANGED
@@ -14,7 +14,7 @@
  import type { Message } from "$lib/types/Message";
  import type { MessageUpdate, WebSearchUpdate } from "$lib/types/MessageUpdate";
  import titleUpdate from "$lib/stores/titleUpdate";
-
+ import file2base64 from "$lib/utils/file2base64.js";
  export let data;

  let messages = data.messages;
@@ -32,6 +32,8 @@
  let loading = false;
  let pending = false;

+ let files: File[] = [];
+
  async function convFromShared() {
  try {
  loading = true;
@@ -79,14 +81,37 @@
  retryMessageIndex = messages.length;
  }

+ const module = await import("browser-image-resizer");
+
+ // currently, only IDEFICS is supported by TGI
+ // the size of images is hardcoded to 224x224 in TGI
+ // this will need to be configurable when support for more models is added
+ const resizedImages = await Promise.all(
+ files.map(async (file) => {
+ return await module
+ .readAndCompressImage(file, {
+ maxHeight: 224,
+ maxWidth: 224,
+ quality: 1,
+ })
+ .then(async (el) => await file2base64(el as File));
+ })
+ );
+
  // slice up to the point of the retry
  messages = [
  ...messages.slice(0, retryMessageIndex),
- { from: "user", content: message, id: messageId },
+ {
+ from: "user",
+ content: message,
+ id: messageId,
+ files: isRetry ? messages[retryMessageIndex].files : resizedImages,
+ },
  ];

- const responseId = randomUUID();
+ files = [];

+ const responseId = randomUUID();
  const response = await fetch(`${base}/conversation/${$page.params.id}`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
@@ -96,9 +121,11 @@
  response_id: responseId,
  is_retry: isRetry,
  web_search: $webSearchParameters.useSearch,
+ files: isRetry ? undefined : resizedImages,
  }),
  });

+ files = [];
  if (!response.body) {
  throw new Error("Body not defined");
  }
@@ -107,6 +134,7 @@
  error.set((await response.json())?.message);
  return;
  }
+
  // eslint-disable-next-line no-undef
  const encoder = new TextDecoderStream();
  const reader = response?.body?.pipeThrough(encoder).getReader();
@@ -143,6 +171,8 @@
  if (update.type === "finalAnswer") {
  finalAnswer = update.text;
  reader.cancel();
+ loading = false;
+ pending = false;
  invalidate(UrlDependency.Conversation);
  } else if (update.type === "stream") {
  pending = false;
@@ -174,6 +204,9 @@
  } else if (update.status === "error") {
  $error = update.message ?? "An error has occurred";
  }
+ } else if (update.type === "error") {
+ error.set(update.message);
+ reader.cancel();
  }
  } catch (parseError) {
  // in case of parsing error we wait for the next message
@@ -233,8 +266,9 @@
  onMount(async () => {
  // only used in case of creating new conversations (from the parent POST endpoint)
  if ($pendingMessage) {
- await writeMessage($pendingMessage);
- $pendingMessage = "";
+ files = $pendingMessage.files;
+ await writeMessage($pendingMessage.content);
+ $pendingMessage = undefined;
  }
  });

@@ -264,7 +298,7 @@
  }
  }

- $: $page.params.id, (isAborted = true);
+ $: $page.params.id, ((isAborted = true), (loading = false));
  $: title = data.conversations.find((conv) => conv.id === $page.params.id)?.title ?? data.title;
  </script>

@@ -285,6 +319,7 @@
  shared={data.shared}
  preprompt={data.preprompt}
  bind:webSearchMessages
+ bind:files
  on:message={onMessage}
  on:retry={onRetry}
  on:vote={(event) => voteMessage(event.detail.score, event.detail.id)}
src/routes/conversation/[id]/+server.ts CHANGED
@@ -12,6 +12,8 @@ import { runWebSearch } from "$lib/server/websearch/runWebSearch";
  import type { WebSearch } from "$lib/types/WebSearch";
  import { abortedGenerations } from "$lib/server/abortedGenerations";
  import { summarize } from "$lib/server/summarize";
+ import { uploadFile } from "$lib/server/files/uploadFile.js";
+ import sizeof from "image-size";

  export async function POST({ request, locals, params, getClientAddress }) {
  const id = z.string().parse(params.id);
@@ -92,6 +94,7 @@ export async function POST({ request, locals, params, getClientAddress }) {
  id: messageId,
  is_retry,
  web_search: webSearch,
+ files: b64files,
  } = z
  .object({
  inputs: z.string().trim().min(1),
@@ -99,9 +102,42 @@ export async function POST({ request, locals, params, getClientAddress }) {
  response_id: z.optional(z.string().uuid()),
  is_retry: z.optional(z.boolean()),
  web_search: z.optional(z.boolean()),
+ files: z.optional(z.array(z.string())),
  })
  .parse(json);

+ // files is an array of base64 strings encoding Blob objects
+ // we need to convert this array to an array of File objects
+
+ const files = b64files?.map((file) => {
+ const blob = Buffer.from(file, "base64");
+ return new File([blob], "image.png");
+ });
+
+ // check sizes
+ if (files) {
+ const filechecks = await Promise.all(
+ files.map(async (file) => {
+ const dimensions = sizeof(Buffer.from(await file.arrayBuffer()));
+ return (
+ file.size > 2 * 1024 * 1024 ||
+ (dimensions.width ?? 0) > 224 ||
+ (dimensions.height ?? 0) > 224
+ );
+ })
+ );
+
+ if (filechecks.some((check) => check)) {
+ throw error(413, "File too large, should be <2MB and 224x224 max.");
+ }
+ }
+
+ let hashes: undefined | string[];
+
+ if (files) {
+ hashes = await Promise.all(files.map(async (file) => await uploadFile(file, conv)));
+ }
+
  // get the list of messages
  // while checking for retries
  let messages = (() => {
@@ -113,7 +149,13 @@ export async function POST({ request, locals, params, getClientAddress }) {
  }
  return [
  ...conv.messages.slice(0, retryMessageIdx),
- { content: newPrompt, from: "user", id: messageId as Message["id"], updatedAt: new Date() },
+ {
+ content: newPrompt,
+ from: "user",
+ id: messageId as Message["id"],
+ updatedAt: new Date(),
+ files: conv.messages[retryMessageIdx]?.files,
+ },
  ];
  } // else append the message at the bottom

@@ -125,6 +167,7 @@ export async function POST({ request, locals, params, getClientAddress }) {
  id: (messageId as Message["id"]) || crypto.randomUUID(),
  createdAt: new Date(),
  updatedAt: new Date(),
+ files: hashes,
  },
  ];
  })() satisfies Message[];
@@ -268,6 +311,8 @@ export async function POST({ request, locals, params, getClientAddress }) {
  type: "finalAnswer",
  text: messages[messages.length - 1].content,
  });
+
+ return;
  },
  async cancel() {
  await collections.conversations.updateOne(
src/routes/conversation/[id]/output/[sha256]/+server.ts ADDED
@@ -0,0 +1,49 @@
+import { authCondition } from "$lib/server/auth";
+import { collections } from "$lib/server/database";
+import { error } from "@sveltejs/kit";
+import { ObjectId } from "mongodb";
+import { z } from "zod";
+import type { RequestHandler } from "./$types";
+import { downloadFile } from "$lib/server/files/downloadFile";
+
+export const GET: RequestHandler = async ({ locals, params }) => {
+	const sha256 = z.string().parse(params.sha256);
+
+	const userId = locals.user?._id ?? locals.sessionId;
+
+	// check user
+	if (!userId) {
+		throw error(401, "Unauthorized");
+	}
+
+	// shared conversations use 7-character ids, regular ones use ObjectIds
+	if (params.id.length !== 7) {
+		const convId = new ObjectId(z.string().parse(params.id));
+
+		// check if the user has access to the conversation
+		const conv = await collections.conversations.findOne({
+			_id: convId,
+			...authCondition(locals),
+		});
+
+		if (!conv) {
+			throw error(404, "Conversation not found");
+		}
+	} else {
+		// check if the shared conversation exists
+		const conv = await collections.sharedConversations.findOne({
+			_id: params.id,
+		});
+
+		if (!conv) {
+			throw error(404, "Conversation not found");
+		}
+	}
+
+	const { content, mime } = await downloadFile(sha256, params.id);
+
+	return new Response(content, {
+		headers: {
+			"Content-Type": mime ?? "application/octet-stream",
+		},
+	});
+};
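Given the route shape above, a message's stored `files` hashes resolve to image URLs of the form `/conversation/{id}/output/{sha256}`. A tiny sketch of building such a URL on the client (the helper name is an illustration, not code from the PR):

```typescript
// Builds the URL served by the GET handler above:
// /conversation/{conversationId}/output/{sha256}
// Illustrative helper; chat-ui's actual client code may differ.
function fileUrl(conversationId: string, sha256: string): string {
	return `/conversation/${conversationId}/output/${sha256}`;
}

console.log(fileUrl("abc1234", "deadbeef"));
// -> "/conversation/abc1234/output/deadbeef"
```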
src/routes/conversation/[id]/share/+server.ts CHANGED
@@ -43,6 +43,23 @@ export async function POST({ params, url, locals }) {

 	await collections.sharedConversations.insertOne(shared);

+	// copy files from `${conversation._id}-` to `${shared._id}-`
+	const files = await collections.bucket
+		.find({ filename: { $regex: `${conversation._id}-` } })
+		.toArray();
+
+	await Promise.all(
+		files.map(async (file) => {
+			const newFilename = file.filename.replace(`${conversation._id}-`, `${shared._id}-`);
+			// copy the file by downloading and re-uploading it under the new filename
+			const downloadStream = collections.bucket.openDownloadStream(file._id);
+			const uploadStream = collections.bucket.openUploadStream(newFilename, {
+				metadata: { ...file.metadata, conversation: shared._id.toString() },
+			});
+			downloadStream.pipe(uploadStream);
+		})
+	);
+
 	return new Response(
 		JSON.stringify({
 			url: getShareUrl(url, shared._id),
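The share route keys each copied file by rewriting its filename prefix from `${conversation._id}-` to `${shared._id}-`; the GridFS streams handle the bytes, while the rename itself is a plain string replace. A standalone sketch of just that rename (function name is illustrative):

```typescript
// Mirrors the filename rewrite used when copying files to a shared
// conversation: swap the conversation-id prefix for the shared-id prefix.
// Illustrative helper, not code from the PR.
function renameForShare(filename: string, convId: string, sharedId: string): string {
	return filename.replace(`${convId}-`, `${sharedId}-`);
}

console.log(renameForShare("65a1b2c3-sha256hash", "65a1b2c3", "aBcDeFg"));
// -> "aBcDeFg-sha256hash"
```

Note that `String.prototype.replace` with a string argument only replaces the first occurrence, which is the desired behavior here since the id prefix appears once at the start of the filename.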