Optimized embedding method in RAG #37
by Sushanta007 - opened
Hi Team,
I want to understand whether there are other approaches for picking the best embedding model for RAG, apart from simply going with GTE- or E5-based embeddings.
I need a few suggestions in this regard.
Hello!
Generally, the MTEB benchmark is used for picking embedding models. The retrieval task might be particularly interesting.
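For context, scoring a single model on one MTEB retrieval task is fairly quick with the `mteb` package. A minimal sketch below; the model and task names are illustrative choices, not recommendations from this thread:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model_name = "intfloat/e5-base-v2"   # illustrative embedding model
model = SentenceTransformer(model_name)

# "SciFact" is one of the MTEB retrieval tasks; swap in tasks closer to your domain.
evaluation = MTEB(tasks=["SciFact"])
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
```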
- Tom Aarsen
Hello Tom,
That's correct, but here is what I am trying to understand.
Suppose I pick the 'e5-mistral-7b-instruct' embedding model and it does not give the desired results. How do I select the next model? If I keep trying different models one by one, it could take a long time. Is there another way to pick a model (say, based on the characteristics of my dataset)?
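One practical way to narrow the field before committing to long benchmark runs is a quick in-domain check: take a small labelled slice of your own corpus (a query plus the passage it should retrieve) and compare a shortlist of candidates on it. A minimal sketch with sentence-transformers; the model names and sample data below are placeholder assumptions, not taken from this thread:

```python
# Rough in-domain smoke test: encode a handful of your own queries and passages
# with each candidate model and check whether the known-relevant passage ranks first.
from sentence_transformers import SentenceTransformer, util

candidates = ["intfloat/e5-base-v2", "thenlper/gte-base"]  # illustrative shortlist

# Tiny labelled sample from your corpus: query -> the passage that should be retrieved.
samples = {
    "How do I reset my password?": "To reset your password, open Settings and choose 'Reset password'.",
    "What is the refund window?": "Refunds are accepted within 30 days of purchase.",
}
corpus = list(samples.values())

for name in candidates:
    # Note: E5-family models expect "query: "/"passage: " prefixes; omitted here for brevity.
    model = SentenceTransformer(name)
    corpus_emb = model.encode(corpus, convert_to_tensor=True, normalize_embeddings=True)
    hits = 0
    for query, gold in samples.items():
        q_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)
        scores = util.cos_sim(q_emb, corpus_emb)[0]
        hits += int(corpus[int(scores.argmax())] == gold)
    print(f"{name}: top-1 accuracy {hits}/{len(samples)}")
```

With even a few dozen labelled pairs this kind of check can rule out weak candidates cheaply, and the survivors can then be evaluated more thoroughly (for example on the MTEB retrieval tasks mentioned above or on a larger slice of your data).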