# tokenspace directory

This directory contains utilities for browsing the "token space" of
CLIP ViT-L/14.

The primary tool is "generate-distances.py", which allows command-line
browsing of words and their nearest neighbours.


## generate-distances.py

Loads the generated embeddings, calculates a full matrix of distances
between all tokens, and then reads in words interactively, showing the
nearest neighbours of each.

Running it requires the files "embeddings.safetensors" and "fullword.json".
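
A minimal sketch of the neighbour-lookup logic, assuming torch and
safetensors are installed. The tensor key name, the fullword.json layout,
and the helper names below are illustrative assumptions, not necessarily
what generate-distances.py actually uses:

```python
import json

import torch
from safetensors.torch import load_file

# Assumed key name for the stored tensor; adjust if the file uses another.
embeddings = load_file("embeddings.safetensors")["embeddings"]

with open("fullword.json") as f:
    # Assumed layout: a mapping of word -> token id, with insertion order
    # matching the tensor rows; we only need the words themselves.
    words = list(json.load(f))

# Full pairwise Euclidean distance matrix: [number-of-words, number-of-words]
distances = torch.cdist(embeddings, embeddings)

def show_neighbours(word: str, k: int = 10) -> None:
    """Print the k nearest words to the given word."""
    idx = words.index(word)
    dists, indices = distances[idx].topk(k + 1, largest=False)
    for d, i in zip(dists[1:], indices[1:]):  # skip the word itself
        print(f"{words[i]}  (distance {d:.4f})")

show_neighbours("cat")
```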



## generate-embeddings.py

Generates the "embeddings.safetensors" file. Takes a few minutes to run.

Goes through the fullword.json file and generates a standalone embedding
for each word. The shape of the resulting embeddings tensor is
[number-of-words][768].

Note that yes, it is possible to directly pull a tensor from the CLIP model,
using the key name text_model.embeddings.token_embedding.weight.

However, this will NOT GIVE YOU THE RIGHT DISTANCES!
That is why we calculate and store the embedding weights actually
produced by running each word through the full CLIP text-encoding process.
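
A hedged sketch of that process, using the Hugging Face transformers CLIP
text encoder. The checkpoint name, the pooling choice (the EOS pooler
output), and the fullword.json layout are assumptions and may differ from
what generate-embeddings.py actually does:

```python
import json

import torch
from safetensors.torch import save_file
from transformers import CLIPTextModel, CLIPTokenizer

MODEL = "openai/clip-vit-large-patch14"  # assumed ViT-L/14 checkpoint
tokenizer = CLIPTokenizer.from_pretrained(MODEL)
model = CLIPTextModel.from_pretrained(MODEL)

with open("fullword.json") as f:
    words = list(json.load(f))  # assumed: mapping of word -> token id

rows = []
with torch.no_grad():
    for word in words:
        inputs = tokenizer(word, return_tensors="pt")
        # pooler_output is the EOS-token state after the full text encoder,
        # shape [1, 768] for ViT-L/14 -- not the raw token-embedding table.
        rows.append(model(**inputs).pooler_output[0])

# Stack into [number-of-words][768] and save.
save_file({"embeddings": torch.stack(rows)}, "embeddings.safetensors")
```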

## embeddings.safetensors

Data file generated by generate-embeddings.py
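
To quickly inspect it (assuming the safetensors package is installed):

```python
from safetensors import safe_open

with safe_open("embeddings.safetensors", framework="pt") as f:
    for key in f.keys():
        print(key, f.get_tensor(key).shape)
```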



## fullword.json

This file contains a collection of "one word, one CLIP token id" pairings.
It was derived from vocab.json, which is part of multiple SD models on
huggingface.co.

The file was optimized for what people are actually going to type as words.
First, all the non-"</w>" entries were stripped out.
Then all the entries consisting of punctuation or foreign characters were
stripped out.
Finally, the "</w>" marker itself was stripped off each remaining entry,
for ease of use.
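
An illustrative reconstruction of those filtering steps. The original
processing script is not included here, so the letters-only regex and the
output layout below are assumptions:

```python
import json
import re

# vocab.json maps BPE token strings to CLIP token ids.
with open("vocab.json") as f:
    vocab = json.load(f)

fullword = {}
for token, token_id in vocab.items():
    # Keep only whole-word entries, which carry the "</w>" end-of-word marker.
    if not token.endswith("</w>"):
        continue
    word = token[: -len("</w>")]  # strip the marker for ease of use
    # Drop punctuation and foreign characters; plain ASCII letters only
    # (assumed filter -- the original criteria may differ).
    if not re.fullmatch(r"[a-zA-Z]+", word):
        continue
    fullword[word] = token_id

with open("fullword.json", "w") as f:
    json.dump(fullword, f, indent=1)
```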