---
title: m4-dialogue
emoji: 🐨
colorFrom: red
colorTo: indigo
sdk: gradio
sdk_version: 3.38.0
app_file: app_dialogue.py
pinned: false
---
# M4 Visualization
For visualizations, we have a main app that calls multiple child apps via the Gradio API to retrieve generations. This allows us to query multiple models in parallel instead of running them sequentially.
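As a rough illustration of this setup, the sketch below shows how a main app could fan out to several child Spaces through `gradio_client` and collect their generations concurrently. The Space ids, the `/predict` endpoint name, and the prompt are placeholders for illustration, not the actual values used in `app_dialogue.py`.

```python
# Minimal sketch: the main app querying several child Spaces in parallel
# via the Gradio client. All Space ids and the api_name are placeholders.
from concurrent.futures import ThreadPoolExecutor

from gradio_client import Client

# Hypothetical child Spaces hosting individual models (replace with real repo ids).
CHILD_SPACES = [
    "HuggingFaceM4/model-a-space",
    "HuggingFaceM4/model-b-space",
]


def query_child(space_id: str, prompt: str) -> str:
    """Call one child Space's generation endpoint and return its output."""
    client = Client(space_id, hf_token="hf_...")  # token with read access
    # api_name depends on how the child app names its endpoint.
    return client.predict(prompt, api_name="/predict")


def query_all(prompt: str) -> list[str]:
    """Query all child Spaces concurrently instead of sequentially."""
    with ThreadPoolExecutor(max_workers=len(CHILD_SPACES)) as pool:
        return list(pool.map(lambda s: query_child(s, prompt), CHILD_SPACES))


if __name__ == "__main__":
    for space, output in zip(CHILD_SPACES, query_all("Describe this image.")):
        print(space, "->", output)
```

In a long-running app, the clients would typically be created once and reused rather than instantiated on every call, but the fan-out pattern is the same.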
## How to?
The process of adding a model to the main space:
- Use `huggingface-cli login` to log in with an auth token that has read/write access to the `HuggingFaceM4` org on the hub.
- Use the `./upload_checkpoint_to_hub_gcs.sh` script to upload a checkpoint from the GCP store to the hub. An example command to upload the checkpoint for step 3000 from `tr_121ter` to the hub: `./m4/visualization/upload_checkpoint_to_hub_gcs.sh gs://hf-science-m4-cold/local_experiment_dir/tr_121ter/opt_step-3000`. This will create a model repo under the `HuggingFaceM4` org on the hub. If you are on the cluster, use `./upload_checkpoint_to_hub_s3.sh` instead. I recommend being on a compute node to avoid disk space issues (uploading to the hub consists of downloading the checkpoint locally, creating a repo on the hub, copying it locally, filling it with the weights, and committing the weights to the hub repo).
- [MANUAL] Go to the hub and create a repo of type `space` with the same name as the model. In the space's settings, add a secret `HF_AUTH_TOKEN` with a token that has read access to the `HuggingFaceM4` org. This step can potentially be automated in the future.
- [MANUAL] Edit the three dictionaries in `m4/visualization/app_dialogue.py` to include your model, following the existing format of those dictionaries (see the sketch after this list).
- Run `m4/visualization/sync-repo.sh <name_of_the_space_on_the_hub>` to sync the space repo with your local setup. This will automatically update the space to have the latest code from `m4/visualization/app_dialogue.py`.
- Run `m4/visualization/sync-repo.sh main` to also update the main space with the new model.
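For the dictionary-editing step above, the snippet below is a purely hypothetical sketch of what adding an entry could look like; the actual dictionary names, keys, and value formats are defined in `m4/visualization/app_dialogue.py` and should be followed exactly.

```python
# Purely hypothetical sketch: the real dictionary names and formats live in
# m4/visualization/app_dialogue.py and take precedence over everything below.

# Assumed: a short key identifying the new model, used consistently in all three dicts.
MODEL_KEY = "tr_121ter_opt_step-3000"

# Assumed dict 1: display name shown in the UI.
MODEL_TO_DISPLAY_NAME = {MODEL_KEY: "tr_121ter (opt step 3000)"}

# Assumed dict 2: hub model repo the checkpoint was uploaded to.
MODEL_TO_MODEL_REPO = {MODEL_KEY: "HuggingFaceM4/tr_121ter-opt_step-3000"}

# Assumed dict 3: child Space queried via the Gradio API for this model.
MODEL_TO_SPACE = {MODEL_KEY: "HuggingFaceM4/tr_121ter-opt_step-3000"}
```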