---
license: mit
tags:
- transformers.js
- transformers
- semanticsearch
- SemanticFinder
---
# Frontend-only live semantic search with transformers.js

- App: [SemanticFinder](https://do-me.github.io/SemanticFinder/)
- GitHub: [do-me/SemanticFinder](https://github.com/do-me/SemanticFinder)
This is the HF data repo for indexed texts, ready to import into SemanticFinder. The files contain the original text, the text chunks, and their embeddings.
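The indexes are plain gzipped JSON, so they can also be loaded outside of SemanticFinder. Below is a minimal sketch of fetching and decompressing one in the browser; the download URL and the field comment are illustrative assumptions, not a documented schema:

```js
// Sketch: fetch and decompress one of the .json.gz index files in the browser.
// DecompressionStream is a standard web API in current Chromium, Firefox and Safari.
async function loadIndex(url) {
  const response = await fetch(url);
  const decompressed = response.body.pipeThrough(new DecompressionStream("gzip"));
  const text = await new Response(decompressed).text();
  return JSON.parse(text); // original text, chunks and embeddings (schema simplified here)
}

// Hypothetical download URL, assembled from the "filename" column in the catalogue below.
const index = await loadIndex(
  "https://huggingface.co/datasets/do-me/SemanticFinder/resolve/main/Hansel_and_Gretel_4de079eb.json.gz"
);
```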
## Catalogue
filesize (MB) | textTitle | textAuthor | textYear | textLanguage | URL | modelName | quantized | splitParam | splitType | characters | chunks | wordsToAvoidAll | wordsToCheckAll | wordsToAvoidAny | wordsToCheckAny | exportDecimals | lines | textNotes | textSourceURL | filename
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
100.96 | Collection of 100 books | Various Authors | 1890 | en | https://do-me.github.io/SemanticFinder/?hf=Collection_of_100_books_dd80b04b | Xenova/bge-small-en-v1.5 | True | 100 | Words | 55705582 | 158957 | | | | | 2 | 1085035 | US Public Domain Books (English) | https://huggingface.co/datasets/storytracer/US-PD-Books/tree/main/data | Collection_of_100_books_dd80b04b.json.gz
4.78 | Das Kapital | Karl Marx | 1867 | de | https://do-me.github.io/SemanticFinder/?hf=Das_Kapital_c1a84fba | Xenova/multilingual-e5-small | True | 80 | Words | 2003807 | 3164 | | | | | 5 | 28673 | | https://ia601605.us.archive.org/13/items/KarlMarxDasKapitalpdf/KAPITAL1.pdf | Das_Kapital_c1a84fba.json.gz
2.58 | Divina Commedia | Dante | 1321 | it | https://do-me.github.io/SemanticFinder/?hf=Divina_Commedia_d5a0fa67 | Xenova/multilingual-e5-base | True | 50 | Words | 383782 | 1179 | | | | | 5 | 6225 | | http://www.letteratura-italiana.com/pdf/divina%20commedia/08%20Inferno%20in%20versione%20italiana.pdf | Divina_Commedia_d5a0fa67.json.gz
11.92 | Don Quijote | Miguel de Cervantes | 1605 | es | https://do-me.github.io/SemanticFinder/?hf=Don_Quijote_14a0b44 | Xenova/multilingual-e5-base | True | 25 | Words | 1047150 | 7186 | | | | | 4 | 12005 | | https://parnaseo.uv.es/lemir/revista/revista19/textos/quijote_1.pdf | Don_Quijote_14a0b44.json.gz
0.06 | Hansel and Gretel | Brothers Grimm | 1812 | en | https://do-me.github.io/SemanticFinder/?hf=Hansel_and_Gretel_4de079eb | TaylorAI/gte-tiny | True | 100 | Chars | 5304 | 55 | | | | | 5 | 9 | | https://www.grimmstories.com/en/grimm_fairy-tales/hansel_and_gretel | Hansel_and_Gretel_4de079eb.json.gz
13.52 | Iliad | Homer | -750 | gr | https://do-me.github.io/SemanticFinder/?hf=Iliad_8de5d1ea | Xenova/multilingual-e5-small | True | 20 | Words | 1597139 | 11848 | | | | | 5 | 32659 | Including modern interpretation | https://www.stipsi.gr/homer/iliada.pdf | Iliad_8de5d1ea.json.gz
1.74 | IPCC Report 2023 | IPCC | 2023 | en | https://do-me.github.io/SemanticFinder/?hf=IPCC_Report_2023_2b260928 | Supabase/bge-small-en | True | 200 | Chars | 307811 | 1566 | | | | | 5 | 3230 | state of knowledge of climate change | https://report.ipcc.ch/ar6syr/pdf/IPCC_AR6_SYR_LongerReport.pdf | IPCC_Report_2023_2b260928.json.gz
25.56 | King James Bible | None | | en | https://do-me.github.io/SemanticFinder/?hf=King_James_Bible_24f6dc4c | TaylorAI/gte-tiny | True | 200 | Chars | 4556163 | 23056 | | | | | 5 | 80496 | | https://www.holybooks.com/wp-content/uploads/2010/05/The-Holy-Bible-King-James-Version.pdf | King_James_Bible_24f6dc4c.json.gz
11.45 | King James Bible | None | | en | https://do-me.github.io/SemanticFinder/?hf=King_James_Bible_6434a78d | TaylorAI/gte-tiny | True | 200 | Chars | 4556163 | 23056 | | | | | 2 | 80496 | | https://www.holybooks.com/wp-content/uploads/2010/05/The-Holy-Bible-King-James-Version.pdf | King_James_Bible_6434a78d.json.gz
39.32 | Les Misérables | Victor Hugo | 1862 | fr | https://do-me.github.io/SemanticFinder/?hf=Les_Misérables_2239df51 | Xenova/multilingual-e5-base | True | 25 | Words | 3236941 | 19463 | | | | | 5 | 74491 | All five volumes included | https://beq.ebooksgratuits.com/vents/Hugo-miserables-1.pdf | Les_Misérables_2239df51.json.gz
8.67 | List of the Most Common English Words | Dolph | 2012 | en | https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc | Xenova/bge-small-en-v1.5 | True | \n | Regex | 210518 | 25322 | | | | | 2 | 25323 | GitHub Repo | https://raw.githubusercontent.com/dolph/dictionary/master/popular.txt | List_of_the_Most_Common_English_Words_0d1e28dc.json.gz
15.61 | List of the Most Common English Words | Dolph | 2012 | en | https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_70320cde | Xenova/multilingual-e5-base | True | \n | Regex | 210518 | 25322 | | | | | 2 | 25323 | GitHub Repo | https://raw.githubusercontent.com/dolph/dictionary/master/popular.txt | List_of_the_Most_Common_English_Words_70320cde.json.gz
0.46 | REGULATION (EU) 2023/138 | European Commission | 2022 | en | https://do-me.github.io/SemanticFinder/?hf=REGULATION_(EU)_2023_138_c00e7ff6 | Supabase/bge-small-en | True | 25 | Words | 76809 | 424 | | | | | 5 | 1323 | | https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32023R0138&qid=1704492501351 | REGULATION_(EU)_2023_138_c00e7ff6.json.gz
0.07 | Universal Declaration of Human Rights | United Nations | 1948 | en | https://do-me.github.io/SemanticFinder/?hf=Universal_Declaration_of_Human_Rights_0a7da79a | TaylorAI/gte-tiny | True | \nArticle | Regex | 8623 | 63 | | | | | 5 | 109 | 30 articles | https://www.un.org/en/about-us/universal-declaration-of-human-rights | Universal_Declaration_of_Human_Rights_0a7da79a.json.gz
## Example
Once loaded in SemanticFinder, it takes around 2 seconds to search through the whole Bible! Try it out:
- Click on one of the example URLs of your choice.
- Once the index has loaded, simply enter something you want to search for and hit "Find". The results appear almost instantly.
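Conceptually, each search embeds the query with transformers.js and ranks the precomputed chunk embeddings by cosine similarity. Here is a simplified sketch of that step, assuming an `index` object with parallel `chunks` and `embeddings` arrays (a stand-in for illustration, not the actual file format):

```js
import { pipeline } from "@xenova/transformers";

// Use the same model the index was built with (here: the Bible entries above).
const extractor = await pipeline("feature-extraction", "TaylorAI/gte-tiny", { quantized: true });

async function search(query, index, topK = 5) {
  // Mean-pooled, L2-normalized query embedding.
  const output = await extractor(query, { pooling: "mean", normalize: true });
  const q = Array.from(output.data);

  // With normalized vectors, cosine similarity reduces to a dot product.
  return index.chunks
    .map((chunk, i) => ({
      chunk,
      score: index.embeddings[i].reduce((sum, v, d) => sum + v * q[d], 0),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```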
## Create SemanticFinder files
- Just use SemanticFinder as usual and run at least one search so that the index is created (see the sketch after this list). This might take a while if your input is large: e.g. indexing the Bible with 200-character chunks results in ~23k embeddings and takes 15-30 minutes with a quantized gte-tiny model.
- Add the metadata (so other people can find your index) and export the file. Note that you can reduce the number of exported decimals to shrink the file size: 3 is usually more than enough depending on the model, and fewer will do in most cases, but if you need the highest accuracy go with 5 or more.
- Create a PR here if you want your index added to the official collection! Just make sure to run `create_meta_data_csv_md.py` once to update the CSV/MD files. For now, the `readme.md` table here needs to be updated manually from `meta_data.md`.
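As referenced in the first bullet, indexing boils down to chunking the text, embedding every chunk, and rounding the vectors before export. A rough sketch under those assumptions; SemanticFinder's actual implementation differs in details:

```js
import { pipeline } from "@xenova/transformers";

const extractor = await pipeline("feature-extraction", "TaylorAI/gte-tiny", { quantized: true });

// Fixed-size character chunks, matching the "Chars" split type in the catalogue.
function chunkByChars(text, size) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

async function buildIndex(text, splitParam = 200, exportDecimals = 3) {
  const chunks = chunkByChars(text, splitParam);
  const embeddings = [];
  for (const chunk of chunks) {
    const output = await extractor(chunk, { pooling: "mean", normalize: true });
    // Rounding to a few decimals shrinks the exported file considerably.
    embeddings.push(Array.from(output.data).map((v) => Number(v.toFixed(exportDecimals))));
  }
  return { chunks, embeddings };
}
```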
## Privacy
- This repo is public and shares documents of public interest or documents in the public domain.
- If you have sensitive documents, you can still create the index with SemanticFinder and use it only locally: either load the index from disk each time, or host it on your local network and add the URL in SemanticFinder.
## Use cases
### Standard use case
Search for the most similar words, sentences, paragraphs or pages in any text. Just imagine if CTRL+F could find related words, not only the exact one you typed! If you're working on the same text repeatedly, you can save the index and reuse it.
Also, there is the option of summarizing the results with generative AI, such as Qwen models running right in your browser, or by connecting a heavier Llama 2 instance via Ollama.
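The Ollama connection, for instance, is just an HTTP call against its local `/api/generate` endpoint. A hedged sketch; the model name and prompt wording are placeholders:

```js
// Sketch: summarize the top search results with a local Ollama instance.
// Assumes Ollama is running on its default port and the model has been pulled.
async function summarize(results) {
  const prompt = `Summarize these passages:\n\n${results.map((r) => r.chunk).join("\n\n")}`;
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: false }),
  });
  const data = await response.json();
  return data.response;
}
```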
### Advanced use cases
- Translate words with multilingual embeddings, or see which words out of a given list are most similar to your input word. Using e.g. the index of ~30k English words, you can query in more than 100 input languages (see the sketch after this list)! Note that the expert settings change here so that only the first match is displayed.
- English synonym finder: again using the index of ~30k English words, but with slightly better (and smaller) English-only embeddings. Same expert settings here.
- The universal index idea, i.e. use the 30k English words index and do not run inference on any new words. This way you can perform instant semantic search on unknown / unseen / not-yet-indexed texts! Use this URL, then copy and paste any text of your choice into the text field. Inference on new words is turned off for speed gains.
- A hybrid version of the universal index, where you use the 30k English words as the start index but then "fill up" with all the additional words the index doesn't know yet. For this option, use this URL, where inference is turned on again. This yields the best results and is a good compromise, assuming that new texts generally don't contain many new words. Even if it's a couple of hundred (as in a research paper in a niche domain), inference is quite fast.
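As a toy illustration of the first bullet, translating via a multilingual word index works like this: embed an English word list once, then query it with a word in another language and take the nearest neighbour. The word list and query below are placeholders; the `query:`/`passage:` prefixes follow the e5 models' recommended usage:

```js
import { pipeline } from "@xenova/transformers";

const extractor = await pipeline("feature-extraction", "Xenova/multilingual-e5-base", { quantized: true });

async function embed(text) {
  const output = await extractor(text, { pooling: "mean", normalize: true });
  return Array.from(output.data);
}

// Embed a (placeholder) English word list once, with the "passage: " prefix
// that e5 models expect for indexed content.
const words = ["house", "dog", "liberty", "bread"];
const wordVectors = [];
for (const w of words) wordVectors.push(await embed(`passage: ${w}`));

// Query in any of the model's ~100 input languages, e.g. German "Freiheit".
const q = await embed("query: Freiheit");
const scores = wordVectors.map((v) => v.reduce((s, x, i) => s + x * q[i], 0));
console.log(words[scores.indexOf(Math.max(...scores))]); // likely "liberty"
```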