Dominik Weckmüller

do-me

AI & ML interests

Making AI more accessible. Working on semantic search, embeddings and Geospatial AI applications. https://geo.rocks

do-me's activity

posted an update about 15 hours ago
What are your favorite text chunkers/splitters?
Mine are:
- https://github.com/benbrandt/text-splitter (Rust/Python, battle-tested, Wasm version coming soon)
- https://github.com/umarbutler/semchunk (Python, really performant but some issues with huge docs)

I tried the huge Jina AI regex, but it failed for my (admittedly messy) documents, e.g. from EUR-LEX. Their free segmenter API is really cool but unfortunately times out on my huge docs (~100 pages): https://jina.ai/segmenter/

Also, I tried to write a Vanilla JS chunker with a simple, adjustable hierarchical logic (inspired by the above). I think it does a decent job for the few lines of code: https://do-me.github.io/js-text-chunker/
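The hierarchical logic is roughly this; a minimal TypeScript sketch of the idea rather than the actual code from that repo (the separators and target size are adjustable assumptions):

```typescript
// Sketch of a hierarchical chunker: split on the coarsest separator first,
// recurse into finer separators only for pieces that are still too long,
// then greedily merge neighbours back up to the target size.
const SEPARATORS = ["\n\n", "\n", ". ", " "]; // paragraph -> line -> sentence -> word

function splitRecursive(text: string, maxChars: number, level = 0): string[] {
  if (text.length <= maxChars || level >= SEPARATORS.length) return [text];
  const pieces = text.split(SEPARATORS[level]).filter((p) => p.length > 0);
  return pieces.flatMap((p) => splitRecursive(p, maxChars, level + 1));
}

function chunk(text: string, maxChars = 500): string[] {
  const pieces = splitRecursive(text, maxChars);
  const chunks: string[] = [];
  let current = "";
  for (const piece of pieces) {
    if (current && current.length + piece.length + 1 > maxChars) {
      chunks.push(current);
      current = piece;
    } else {
      current = current ? current + " " + piece : piece;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

// console.log(chunk(longDocument, 500));
```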

Happy to hear your thoughts!
posted an update 12 days ago
SemanticFinder now supports WebGPU thanks to @Xenova's efforts with transformers.js v3!
Expect massive performance gains. Ran inference on a whole book with 46k chunks in <5 min. If your device doesn't support #WebGPU, use the classic Wasm-based version:
- WebGPU: https://do-me.github.io/SemanticFinder/webgpu/
- Wasm: https://do-me.github.io/SemanticFinder/

WebGPU harnesses the full power of your hardware, no longer being restricted to just the CPU. The speedup is significant (4-60x) for all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups or Apple Silicon. Measure the difference for your device here: Xenova/webgpu-embedding-benchmark
Chrome currently works out of the box, Firefox requires some tweaking.
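For reference, switching to WebGPU in transformers.js v3 is basically a one-line change; a minimal sketch (the model ID is just an example, any compatible feature-extraction model works):

```typescript
// Minimal sketch: feature extraction with transformers.js v3 on WebGPU.
// Drop the `device` option to stay on the default Wasm backend.
import { pipeline } from "@huggingface/transformers";

const extractor = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2", // example model
  { device: "webgpu" }       // the relevant switch
);

const embeddings = await extractor(
  ["WebGPU harnesses the full power of your hardware."],
  { pooling: "mean", normalize: true }
);
console.log(embeddings.dims); // e.g. [1, 384] for this model
```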

WebGPU + transformers.js makes it possible to build amazing applications and make them accessible to everyone. E.g. SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: do-me/SemanticFinder
Happy to hear your ideas!
replied to Xenova's post 23 days ago

This is absolutely amazing, the speedup is so insane and makes on-device AI much more accessible. Thank you so so much for this!

It would be great to have some kind of "auto" mode for the device param so that devices supporting WebGPU use it right away. Anyway, happily waiting for the docs/blog :)
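Something like this is what I had in mind; a rough TypeScript sketch (assuming "wasm" stays a valid fallback value for the device option):

```typescript
// Sketch of an "auto" device mode: use WebGPU when the browser exposes it,
// otherwise fall back to the Wasm backend.
import { pipeline } from "@huggingface/transformers";

const device = "gpu" in navigator ? "webgpu" : "wasm";

const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2", {
  device, // "webgpu" where supported, "wasm" otherwise
});
```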

posted an update 4 months ago
Hey Hugging Face, love your open-source attitude and particularly transformers.js for embedding models! Your current "Use this model" integration gives you the transformers.js code, but there is no quick way to really test a model in one click.
SemanticFinder (do-me/SemanticFinder) offers such an integration for all compatible feature-extraction models! All you need to do is add a URL parameter with the model ID, like so: https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5. You can also decide between quantized and normal mode with https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5&quantized=false. Maybe that would do for a HF integration?
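For a hypothetical "Try in SemanticFinder" button, generating such a link is a one-liner; a minimal TypeScript sketch (the helper name is made up, the model and quantized parameters are the ones described above):

```typescript
// Sketch: build a SemanticFinder test link for a given model ID.
function semanticFinderUrl(modelId: string, quantized = true): string {
  const url = new URL("https://do-me.github.io/SemanticFinder/");
  url.searchParams.set("model", modelId);
  if (!quantized) url.searchParams.set("quantized", "false");
  return url.toString();
}

console.log(semanticFinderUrl("Xenova/bge-small-en-v1.5", false));
// https://do-me.github.io/SemanticFinder/?model=Xenova%2Fbge-small-en-v1.5&quantized=false
```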
I know it's a small open source project, but I really believe that it provides value for devs before deciding on one model or the other. Also, it's much easier than having to spin up a notebook, install dependencies etc. It's private, so you could even do some real-world evaluation on personal data without having to worry about third-party services' data policies.
Happy to hear the community's thoughts!
posted an update 4 months ago
Get daily/weekly/monthly notifications about the latest trending feature-extraction models compatible with transformers.js for semantic search! All open source, built on GitHub Actions and ntfy.sh.

I'm also providing daily updated tables (filterable and sortable, by ONNX model size too!) if you only want to have a look once in a while. Download what suits you best: CSV, XLSX, Parquet, JSON or HTML.

Would you like to monitor other models/tags? Feel free to open a PR :)

GitHub: https://github.com/do-me/trending-huggingface-models
Ntfy.sh daily channel: https://ntfy.sh/feature_extraction_transformers_js_models_daily
Sortable table: https://do-me.github.io/trending-huggingface-models/
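If you prefer a script over the ntfy app, you can also listen to the topic directly; a minimal TypeScript sketch against ntfy.sh's server-sent-events endpoint (see the ntfy docs for the exact endpoints and message fields):

```typescript
// Sketch: subscribe to the daily topic via ntfy.sh's SSE endpoint
// (browser EventSource; ntfy also offers JSON and raw streaming endpoints).
const topic = "feature_extraction_transformers_js_models_daily";
const events = new EventSource(`https://ntfy.sh/${topic}/sse`);

events.onmessage = (e) => {
  const msg = JSON.parse(e.data); // ntfy wraps each notification in a JSON envelope
  console.log(msg.title, msg.message);
};
```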

And the best part: all 145 models are integrated in SemanticFinder, so you can play around with them at https://do-me.github.io/SemanticFinder/!

replied to their post 5 months ago

Thanks a lot for your answer; this is confusing. Apparently other=feature-extraction covers all pipeline_tag=feature-extraction models as well. There are many popular models tagged in the same way, like https://huggingface.co/Snowflake/snowflake-arctic-embed-xs, which might remain in the dark if you're looking for them this way.

It's 137 vs. 159 models, which makes a big difference! It does seem to be the model authors' choice where to tag, but it rather looks like a mistake here. Maybe HF might want to improve this UI-wise?
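To reproduce the count difference without clicking through the UI, something like this against the Hub API should work; a rough TypeScript sketch (I'm assuming /api/models accepts the same pipeline_tag/other/library query parameters as the web UI, so treat the parameter names as unverified):

```typescript
// Sketch: compare how many models each filter combination returns via the Hub API.
// Assumes /api/models accepts the same query parameters as the web UI.
async function countModels(params: Record<string, string>): Promise<number> {
  const url = new URL("https://huggingface.co/api/models");
  for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v);
  url.searchParams.set("limit", "1000");
  const res = await fetch(url);
  const models: unknown[] = await res.json();
  return models.length;
}

console.log(await countModels({ pipeline_tag: "feature-extraction", library: "transformers.js" }));
console.log(await countModels({ other: "feature-extraction", library: "transformers.js" }));
```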

fyi @thenlper

posted an update 5 months ago
Question: HF model search not showing all results

I noticed that when I use the HF model search with these tags:
- feature-extraction
- transformers.js
it is not showing all models that are actually tagged.

Example: all Alibaba-NLP models (e.g. the gte family) are correctly tagged, but they don't show up here:
- https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending&search=gte
- correctly tagged model Alibaba-NLP/gte-large-en-v1.5

Does anyone know why?

fyi @Xenova
posted an update 6 months ago
Hey, I just added three useful advanced use cases to do-me/SemanticFinder.
SemanticFinder is a collection of embeddings for public documents and books. You can create your own index file from any text or PDF and save it without installing or downloading anything. Try it yourself:

1. Translating from 100+ languages to English (even though it might confuse a strawberry with a grapefruit ;D): https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_70320cde&firstOnly=true&inferencingActive=False
2. Finding English synonyms: https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&firstOnly=true&inferencingActive=False
3. The "universal index idea": create an embedding index with 30k English words and reuse it on unseen texts. You can decide to fill the gaps in the index by additional inferencing or just stick to the 30k index for instant semantic similarity.
Initial idea: https://github.com/do-me/SemanticFinder/discussions/48
Try here: https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=False&universalIndexSettingsWordLevel with a text of your choice.

This could be enhanced by adding duplets or triplets like "climate change" or "greenhouse gas". Eventually I'd like to set up vector DB integrations.
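The core of the universal index idea in a few lines; a rough TypeScript sketch with transformers.js (the tiny word list, the model ID and the cosine helper are just illustrative, the real index uses ~30k words from the linked file):

```typescript
// Sketch of the "universal index" idea: embed a fixed vocabulary once,
// then score unseen queries against it by cosine similarity,
// with no further inference needed for words already in the index.
import { pipeline } from "@huggingface/transformers";

const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

async function embed(texts: string[]): Promise<number[][]> {
  const output = await extractor(texts, { pooling: "mean", normalize: true });
  return output.tolist();
}

function cosine(a: number[], b: number[]): number {
  // vectors are L2-normalized above, so the dot product is the cosine similarity
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// 1. Build the index once (in SemanticFinder this is ~30k common English words).
const vocabulary = ["climate", "weather", "economy", "strawberry", "grapefruit"];
const index = await embed(vocabulary);

// 2. Reuse it on unseen input: find the closest indexed word for a query.
const [query] = await embed(["global warming"]);
const scores = index.map((vec, i) => [vocabulary[i], cosine(query, vec)] as const);
scores.sort((a, b) => b[1] - a[1]);
console.log(scores[0]); // closest vocabulary entry and its similarity
```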

Super happy to hear your feedback, ideas and maybe even contributions! :)

---
Edit: Apparently markdown URL formatting only works for HF links.