Readme: Update No Setup (#473)
* Update No Setup
* Update README.md
README.md CHANGED
@@ -28,8 +28,9 @@ A chat interface using open source models, eg OpenAssistant or Llama. It is a Sv
 
 If you don't want to configure, setup, and launch your own Chat UI yourself, you can use this option as a fast deploy alternative.
 
-You can deploy your own customized Chat UI instance with any supported LLM of your choice
-
+You can deploy your own customized Chat UI instance with any supported [LLM](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending) of your choice on [Hugging Face Spaces](https://huggingface.co/spaces). To do so, use the chat-ui template [available here](https://huggingface.co/new-space?template=huggingchat/chat-ui-template).
+
+Set `HUGGING_FACE_HUB_TOKEN` in [Space secrets](https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables) to deploy a model with gated access or a model in a private repository. It's also compatible with the [Inference for PROs](https://huggingface.co/blog/inference-pro) curated list of powerful models with higher rate limits. Make sure to create your personal token first in your [User Access Tokens settings](https://huggingface.co/settings/tokens).
 
 Read the full tutorial [here](https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces).
 
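The `HUGGING_FACE_HUB_TOKEN` secret added above is exposed to the Space as an ordinary environment variable. As a quick sanity check, a minimal sketch (not part of chat-ui itself; the file name and the gated model id are only illustrative assumptions) can confirm that the token is visible in the container and accepted by the Hugging Face Inference API:

```ts
// check-token.ts — hypothetical helper, not part of chat-ui.
// Verifies that the HUGGING_FACE_HUB_TOKEN secret configured in the Space
// is present in the environment and accepted by the Inference API.

async function main() {
  const token = process.env.HUGGING_FACE_HUB_TOKEN;
  if (!token) {
    throw new Error("HUGGING_FACE_HUB_TOKEN is not set in this environment");
  }

  // Example model id only; substitute the gated or private model you deployed.
  const model = "meta-llama/Llama-2-7b-chat-hf";

  const res = await fetch(`https://api-inference.huggingface.co/models/${model}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: "Hello!", parameters: { max_new_tokens: 16 } }),
  });

  // A 200 means the token grants access; 401/403 points at a token or gating issue.
  console.log(`status: ${res.status}`);
  console.log(await res.text());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Run it with a recent Node.js (18+, which ships a global `fetch`) inside the Space, or locally with the same token exported, before wiring the model into the Chat UI configuration.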