# tokenspace directory

This directory contains utilities for browsing the "token space" of
CLIP ViT-L/14.

The primary tool is "generate-distances.py", which allows command-line
browsing of words and their neighbours.

## generate-distances.py

Loads the generated embeddings, calculates a full matrix of distances
between all tokens, and then reads in words to show neighbours for.

To run, this requires the files "embeddings.safetensors" and "fullword.json".

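The neighbour lookup can be sketched roughly like this. A toy matrix stands in for the real embeddings.safetensors / fullword.json data, and `nearest_words` is an illustrative helper, not the script's actual function:

```python
import numpy as np

def nearest_words(embeddings, words, query, k=3):
    """Return the k words closest (by Euclidean distance) to `query`."""
    qi = words.index(query)
    # One row of the full distance matrix: query vs. every token.
    dists = np.linalg.norm(embeddings - embeddings[qi], axis=1)
    # Sort ascending; position 0 is the query itself (distance 0), so skip it.
    order = np.argsort(dists)
    return [words[i] for i in order[1:k + 1]]

# Toy stand-in for the real word list and [number-of-words][768] tensor.
words = ["cat", "dog", "car"]
emb = np.array([[0.0, 0.0],
                [0.1, 0.0],
                [5.0, 5.0]])
print(nearest_words(emb, words, "cat", k=2))  # ['dog', 'car']
```

The real script does the same thing at full scale: one distance row per query word, sorted ascending.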
## generate-embeddings.py

Generates the "embeddings.safetensors" file. Takes a few minutes to run.

It goes through the fullword.json file and generates a standalone
embedding for each word.
The shape of the embeddings tensor is
[number-of-words][768]

Note that, yes, it is possible to pull a tensor directly from the CLIP model,
using the key name text_model.embeddings.token_embedding.weight.

However, that will NOT GIVE YOU THE RIGHT DISTANCES!
Hence why we calculate, and then store, the embedding weights actually
generated by the CLIP process.

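The core loop can be sketched as follows. Here `encode_word` is only a placeholder for the real CLIP text-encoder call (which is what the script actually runs each word through); the point of the sketch is the per-word loop and the resulting [number-of-words][768] shape:

```python
import numpy as np

def encode_word(word):
    """Placeholder for the real CLIP text-encoder call. The actual script
    runs each word through the model and keeps the resulting 768-dim vector."""
    rng = np.random.default_rng(abs(hash(word)) % (2**32))
    return rng.standard_normal(768)

# fullword.json maps words to token ids; only the words matter for this loop.
fullword = {"cat": 2368, "dog": 1929, "car": 1615}

# One standalone embedding per word, stacked into [number-of-words][768].
embeddings = np.stack([encode_word(w) for w in fullword])
print(embeddings.shape)  # (3, 768)
```

The real output tensor is then saved to "embeddings.safetensors".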
## embeddings.safetensors

Data file generated by generate-embeddings.py.

## fullword.json

This file contains a collection of "one word, one CLIP token id" pairings.
The file was taken from vocab.json, which is part of multiple SD models on
huggingface.co.

The file was optimized for what people are actually going to type as words.
First, all the non-"&lt;/w&gt;" entries were stripped out.
Then all the garbage punctuation and foreign characters were stripped out.
Finally, the "&lt;/w&gt;" suffix itself was stripped out, for ease of use.
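The filtering steps above might look roughly like this (a toy stand-in for vocab.json; the real file is much larger):

```python
import json

# Toy stand-in for vocab.json (token string -> token id).
vocab = {"cat</w>": 2368, "ca": 550, "!!</w>": 995, "dog</w>": 1929}

fullword = {}
for token, tid in vocab.items():
    # Keep only whole-word entries (they carry the "</w>" suffix)...
    if not token.endswith("</w>"):
        continue
    word = token[: -len("</w>")]
    # ...drop punctuation / non-ASCII leftovers, and store without the suffix.
    if word.isascii() and word.isalpha():
        fullword[word] = tid

print(json.dumps(fullword))  # {"cat": 2368, "dog": 1929}
```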
|