CrossEncoder
Hi! nice work!
I'm wondering if this model is suitable for use as a CrossEncoder. I'm trying a basic example but I do not get good results. Does it need fine-tuning?
I want to rerank some results from a hybrid search.
Thanks!
Thanks for your interest! Unfortunately, the model is not suitable for use as a CrossEncoder.
Even with fine-tuning, it would likely be very challenging to use it that way, because reranking requires comparing two concepts within the same input, which is not something the model was trained to do.
Assuming you don't have too many potential matches after your hybrid search, my suggestion would be to use an LLM as the reranker, potentially after providing it with a description of the concepts. If you have too many matches to use an LLM as a cross-encoder, you can use BioLORD as a second-stage bi-encoder to filter or rerank your hybrid results.
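To illustrate the second-stage bi-encoder idea: the reranking step itself is just cosine similarity between the query embedding and each candidate embedding, followed by a sort. Here is a minimal sketch of that scoring logic in plain NumPy; the toy 3-d vectors and candidate names are placeholders (in practice you would produce the embeddings with `SentenceTransformer(...).encode(...)` using the BioLORD checkpoint):

```python
import numpy as np

def rerank(query_emb, candidate_embs, candidates, top_k=3):
    """Rerank candidates by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = c @ q  # cosine similarity of each candidate with the query
    order = np.argsort(-scores)[:top_k]  # best-scoring candidates first
    return [(candidates[i], float(scores[i])) for i in order]

# Toy 3-d embeddings standing in for real BioLORD sentence embeddings.
query_emb = np.array([1.0, 0.0, 0.0])
candidate_embs = np.array([
    [0.9, 0.1, 0.0],  # close to the query
    [0.0, 1.0, 0.0],  # orthogonal to the query
    [0.7, 0.7, 0.0],  # in between
])
candidates = ["concept A", "concept B", "concept C"]

print(rerank(query_emb, candidate_embs, candidates, top_k=2))
```

The candidates coming out of your hybrid search would be embedded once, then scored against the query embedding this way to produce the final ordering.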
Hi! I want to ask you another question: what do you think is the best way to fine-tune the model?
Using the sentence_transformers library, whose v3 was just released.