text |
---|
('RN50', 'openai') |
('RN50', 'yfcc15m') |
('RN50', 'cc12m') |
('RN50-quickgelu', 'openai') |
('RN50-quickgelu', 'yfcc15m') |
('RN50-quickgelu', 'cc12m') |
('RN101', 'openai') |
('RN101', 'yfcc15m') |
('RN101-quickgelu', 'openai') |
('RN101-quickgelu', 'yfcc15m') |
('RN50x4', 'openai') |
('RN50x16', 'openai') |
('RN50x64', 'openai') |
('ViT-B-32', 'openai') |
('ViT-B-32', 'laion400m_e31') |
('ViT-B-32', 'laion400m_e32') |
('ViT-B-32', 'laion2b_e16') |
('ViT-B-32', 'laion2b_s34b_b79k') |
('ViT-B-32', 'datacomp_xl_s13b_b90k') |
('ViT-B-32', 'datacomp_m_s128m_b4k') |
('ViT-B-32', 'commonpool_m_clip_s128m_b4k') |
('ViT-B-32', 'commonpool_m_laion_s128m_b4k') |
('ViT-B-32', 'commonpool_m_image_s128m_b4k') |
('ViT-B-32', 'commonpool_m_text_s128m_b4k') |
('ViT-B-32', 'commonpool_m_basic_s128m_b4k') |
('ViT-B-32', 'commonpool_m_s128m_b4k') |
('ViT-B-32', 'datacomp_s_s13m_b4k') |
('ViT-B-32', 'commonpool_s_clip_s13m_b4k') |
('ViT-B-32', 'commonpool_s_laion_s13m_b4k') |
('ViT-B-32', 'commonpool_s_image_s13m_b4k') |
('ViT-B-32', 'commonpool_s_text_s13m_b4k') |
('ViT-B-32', 'commonpool_s_basic_s13m_b4k') |
('ViT-B-32', 'commonpool_s_s13m_b4k') |
('ViT-B-32-256', 'datacomp_s34b_b86k') |
('ViT-B-32-quickgelu', 'openai') |
('ViT-B-32-quickgelu', 'laion400m_e31') |
('ViT-B-32-quickgelu', 'laion400m_e32') |
('ViT-B-32-quickgelu', 'metaclip_400m') |
('ViT-B-32-quickgelu', 'metaclip_fullcc') |
('ViT-B-16', 'openai') |
('ViT-B-16', 'laion400m_e31') |
('ViT-B-16', 'laion400m_e32') |
('ViT-B-16', 'laion2b_s34b_b88k') |
('ViT-B-16', 'datacomp_xl_s13b_b90k') |
('ViT-B-16', 'datacomp_l_s1b_b8k') |
('ViT-B-16', 'commonpool_l_clip_s1b_b8k') |
('ViT-B-16', 'commonpool_l_laion_s1b_b8k') |
('ViT-B-16', 'commonpool_l_image_s1b_b8k') |
('ViT-B-16', 'commonpool_l_text_s1b_b8k') |
('ViT-B-16', 'commonpool_l_basic_s1b_b8k') |
('ViT-B-16', 'commonpool_l_s1b_b8k') |
('ViT-B-16', 'dfn2b') |
('ViT-B-16-quickgelu', 'metaclip_400m') |
('ViT-B-16-quickgelu', 'metaclip_fullcc') |
('ViT-B-16-plus-240', 'laion400m_e31') |
('ViT-B-16-plus-240', 'laion400m_e32') |
('ViT-L-14', 'openai') |
('ViT-L-14', 'laion400m_e31') |
('ViT-L-14', 'laion400m_e32') |
('ViT-L-14', 'laion2b_s32b_b82k') |
('ViT-L-14', 'datacomp_xl_s13b_b90k') |
('ViT-L-14', 'commonpool_xl_clip_s13b_b90k') |
('ViT-L-14', 'commonpool_xl_laion_s13b_b90k') |
('ViT-L-14', 'commonpool_xl_s13b_b90k') |
('ViT-L-14-quickgelu', 'metaclip_400m') |
('ViT-L-14-quickgelu', 'metaclip_fullcc') |
('ViT-L-14-quickgelu', 'dfn2b') |
('ViT-L-14-336', 'openai') |
('ViT-H-14', 'laion2b_s32b_b79k') |
('ViT-H-14-quickgelu', 'metaclip_fullcc') |
('ViT-H-14-quickgelu', 'dfn5b') |
('ViT-H-14-378-quickgelu', 'dfn5b') |
('ViT-g-14', 'laion2b_s12b_b42k') |
('ViT-g-14', 'laion2b_s34b_b88k') |
('ViT-bigG-14', 'laion2b_s39b_b160k') |
('roberta-ViT-B-32', 'laion2b_s12b_b32k') |
('xlm-roberta-base-ViT-B-32', 'laion5b_s13b_b90k') |
('xlm-roberta-large-ViT-H-14', 'frozen_laion5b_s13b_b90k') |
('convnext_base', 'laion400m_s13b_b51k') |
('convnext_base_w', 'laion2b_s13b_b82k') |
('convnext_base_w', 'laion2b_s13b_b82k_augreg') |
('convnext_base_w', 'laion_aesthetic_s13b_b82k') |
('convnext_base_w_320', 'laion_aesthetic_s13b_b82k') |
('convnext_base_w_320', 'laion_aesthetic_s13b_b82k_augreg') |
('convnext_large_d', 'laion2b_s26b_b102k_augreg') |
('convnext_large_d_320', 'laion2b_s29b_b131k_ft') |
('convnext_large_d_320', 'laion2b_s29b_b131k_ft_soup') |
('convnext_xxlarge', 'laion2b_s34b_b82k_augreg') |
('convnext_xxlarge', 'laion2b_s34b_b82k_augreg_rewind') |
('convnext_xxlarge', 'laion2b_s34b_b82k_augreg_soup') |
('coca_ViT-B-32', 'laion2b_s13b_b90k') |
('coca_ViT-B-32', 'mscoco_finetuned_laion2b_s13b_b90k') |
('coca_ViT-L-14', 'laion2b_s13b_b90k') |
('coca_ViT-L-14', 'mscoco_finetuned_laion2b_s13b_b90k') |
('EVA01-g-14', 'laion400m_s11b_b41k') |
('EVA01-g-14-plus', 'merged2b_s11b_b114k') |
('EVA02-B-16', 'merged2b_s8b_b131k') |
('EVA02-L-14', 'merged2b_s4b_b131k') |
('EVA02-L-14-336', 'merged2b_s6b_b61k') |
('EVA02-E-14', 'laion2b_s4b_b115k') |
# tokenspace directory

This directory contains utilities for browsing the "token space" of the CLIP ViT-L/14 model.
Primary tools are:
- "calculate-distances.py": allows command-line browsing of words and their neighbours
- "graph-embeddings.py": plots graph of full values of two embeddings
## (clipmodel,cliptextmodel)-calculate-distances.py
Loads the generated embeddings, reads in a word, calculates "distance" to every embedding, and then shows the closest "neighbours".
Running these requires the files "embeddings.safetensors" and "dictionary", in matching format.
You will need to rename or copy the appropriate files, as mentioned below.
Note that SD models use cliptextmodel, NOT clipmodel.
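A minimal sketch of that neighbour lookup, assuming "embeddings.safetensors" stores a single [number-of-words, 768] tensor and "dictionary" lists one word per line in the same order (tensor key and file layout are assumptions; the actual script may differ):

```python
# Minimal sketch: nearest-neighbour lookup over precomputed word embeddings.
import torch
from safetensors.torch import load_file

embeds = next(iter(load_file("embeddings.safetensors").values()))  # [N, 768]
words = [w.strip() for w in open("dictionary", encoding="utf-8")]

def neighbours(word, k=10):
    idx = words.index(word)                          # position of the query word
    dists = torch.norm(embeds - embeds[idx], dim=1)  # distance to every embedding
    best = torch.argsort(dists)[: k + 1]             # closest entries (includes itself)
    return [(words[int(i)], dists[i].item()) for i in best]

for w, d in neighbours("cat"):
    print(f"{w:20s} {d:.4f}")
```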
## graph-textmodels.py

Shows the difference between the same word as embedded by CLIPTextModel vs CLIPModel.
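A hedged sketch of that comparison, assuming the Hugging Face "openai/clip-vit-large-patch14" checkpoint via the transformers library (the actual script may differ):

```python
# Hedged sketch: embed one word with CLIPTextModel and with CLIPModel, then plot both.
import torch
import matplotlib.pyplot as plt
from transformers import CLIPModel, CLIPTextModel, CLIPTokenizer

name = "openai/clip-vit-large-patch14"
tok = CLIPTokenizer.from_pretrained(name)
text_model = CLIPTextModel.from_pretrained(name)
clip_model = CLIPModel.from_pretrained(name)

inputs = tok("cat", return_tensors="pt")
with torch.no_grad():
    a = text_model(**inputs).pooler_output[0]      # text encoder's pooled output
    b = clip_model.get_text_features(**inputs)[0]  # projected CLIP text features

plt.plot(a.numpy(), label="CLIPTextModel")
plt.plot(b.numpy(), label="CLIPModel")
plt.legend()
plt.show()
```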
## graph-embeddings.py

Run the script. It will ask you for two text strings. Once you enter both, it will plot the graph and display it for you.
Note that this tool does not require any of the other files; it only needs the requisite Python modules installed (pip install -r requirements.txt).
## embeddings.safetensors

You can either copy one of the provided files, or generate your own. See generate-embeddings.py for that.
Note that you must always use the "dictionary" file that matches your embeddings file.
## embeddings.allids.safetensors

DO NOT USE THIS ONE for programs that expect a matching dictionary. This one is purely id-based; it is intended more for research datamining, but it does have a matching graph front end, graph-byid.py.
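For reference, a minimal sketch of reading that file by numeric token id (the tensor key and layout are assumptions):

```python
# Hedged sketch: plot the stored embedding for one numeric CLIP token id.
import matplotlib.pyplot as plt
from safetensors.torch import load_file

embeds = next(iter(load_file("embeddings.allids.safetensors").values()))
token_id = 320                      # arbitrary example id
plt.plot(embeds[token_id].numpy())
plt.title(f"CLIP token id {token_id}")
plt.show()
```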
## dictionary

Make sure to always use the dictionary file that matches your embeddings file.
The "dictionary.fullword" file is pulled from fullword.json, which is distilled from the "full words" present in the ViT-L/14 CLIP model's provided token dictionary, "vocab.json". Thus there are only around 30,000 words in it.
If you want to use the provided "embeddings.safetensors.huge" file, you will want the matching "dictionary.huge" file, which has over 300,000 words.
The huge file comes from the Linux "wamerican-huge" package, which delivers it as /usr/share/dict/american-english-huge.
There is also an "american-insane" package.
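For reference, a hedged sketch of deriving a one-word-per-line dictionary file from that system word list (the exact filtering used for "dictionary.huge" may differ):

```python
# Hedged sketch: turn the wamerican-huge word list into a dictionary file.
SRC = "/usr/share/dict/american-english-huge"

with open(SRC, encoding="utf-8") as f:
    # keep plain ASCII alphabetic words, drop possessives and duplicates
    words = sorted({w.strip().lower() for w in f
                    if w.strip().isalpha() and w.strip().isascii()})

with open("dictionary.huge", "w", encoding="utf-8") as out:
    out.write("\n".join(words) + "\n")
```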
## generate-embeddings.py

Generates the "embeddings.safetensors" file, based on the "dictionary" file present. It takes a few minutes to run, depending on the size of the dictionary.
The shape of the embeddings tensor is [number-of-words][768].
Note that, yes, it is possible to pull a tensor directly from the CLIP model, using the key name text_model.embeddings.token_embedding.weight.
This will NOT GIVE YOU THE RIGHT DISTANCES! That is why we calculate and store the embedding weights actually generated by the CLIP process.
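A minimal sketch of that process, assuming the Hugging Face "openai/clip-vit-large-patch14" checkpoint and the pooled CLIPTextModel output (the actual script's choices, and the tensor key name, may differ):

```python
# Hedged sketch: run each dictionary word through CLIPTextModel and save the
# resulting [number-of-words, 768] tensor. Key and file names are illustrative.
import torch
from safetensors.torch import save_file
from transformers import CLIPTextModel, CLIPTokenizer

name = "openai/clip-vit-large-patch14"
tok = CLIPTokenizer.from_pretrained(name)
model = CLIPTextModel.from_pretrained(name)

words = [w.strip() for w in open("dictionary", encoding="utf-8") if w.strip()]

rows = []
with torch.no_grad():
    for word in words:
        inputs = tok(word, return_tensors="pt")
        # Pooled output of the text encoder, NOT the raw token_embedding weight
        rows.append(model(**inputs).pooler_output[0])

save_file({"embeddings": torch.stack(rows)}, "embeddings.safetensors")
```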
## fullword.json

This file contains a collection of "one word, one CLIP token id" pairings. It was taken from vocab.json, which is part of multiple SD models on huggingface.co.
The file was optimized for what people are actually going to type as words. First, all the non-"</w>" entries were stripped out. Then all the garbage punctuation and foreign characters were stripped out. Finally, the "</w>" suffix itself was stripped off, for ease of use.
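A hedged sketch of that kind of filtering, assuming vocab.json maps BPE tokens to ids and that word-final tokens carry the "</w>" suffix (the exact filtering rules used here may differ):

```python
# Hedged sketch: distill vocab.json into "one plain word -> one token id" pairs.
import json
import re

with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)

fullwords = {}
for token, token_id in vocab.items():
    if not token.endswith("</w>"):        # keep only whole-word (word-final) entries
        continue
    word = token[: -len("</w>")]          # strip the end-of-word marker
    if re.fullmatch(r"[a-z]+", word):     # drop punctuation and non-ASCII entries
        fullwords[word] = token_id

with open("fullword.json", "w", encoding="utf-8") as f:
    json.dump(fullwords, f, indent=1)
```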