
BGE small finetuned on BIOASQ

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
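
The module stack above amounts to CLS-token pooling followed by L2 normalization. As a minimal sketch of what these modules do, assuming only plain transformers and the Hub repository id from the Usage section below, the same embedding can be reproduced by hand:

import torch
from transformers import AutoTokenizer, AutoModel

repo_id = "juanpablomesa/bge-small-bioasq-1epoch-batch32"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
encoder = AutoModel.from_pretrained(repo_id)

batch = tokenizer(["example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    output = encoder(**batch)

cls_embedding = output.last_hidden_state[:, 0]                         # (1) Pooling: take the [CLS] token
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)   # (2) Normalize to unit length
print(embedding.shape)  # torch.Size([1, 384])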

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32")
# Run inference
sentences = [
    'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
    'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
    'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
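
Because the model was finetuned on BIOASQ passage/question pairs, a typical downstream use is retrieval: encode the questions as queries and the passages as a corpus, then rank by cosine similarity. A minimal retrieval sketch (the passages and questions below are illustrative placeholders, not BIOASQ data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32")

corpus = [
    "The clustered protocadherins are encoded by three closely linked gene clusters.",
    "Histone lysine methylation patterns are altered in medulloblastoma.",
]
queries = ["Which gene clusters encode the clustered protocadherins?"]

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embeddings = model.encode(queries, convert_to_tensor=True)

# Top-k passages per query, scored with cosine similarity
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
print(hits[0])  # [{'corpus_id': 0, 'score': ...}, {'corpus_id': 1, 'score': ...}]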

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.8345
cosine_accuracy@3 0.9222
cosine_accuracy@5 0.942
cosine_accuracy@10 0.9576
cosine_precision@1 0.8345
cosine_precision@3 0.3074
cosine_precision@5 0.1884
cosine_precision@10 0.0958
cosine_recall@1 0.8345
cosine_recall@3 0.9222
cosine_recall@5 0.942
cosine_recall@10 0.9576
cosine_ndcg@10 0.901
cosine_mrr@10 0.8824
cosine_map@100 0.8834
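
The card does not state which split produced these numbers, but metrics in this format appear to come from the Sentence Transformers InformationRetrievalEvaluator. A hedged sketch of how such scores are computed; the queries, corpus, relevance judgments, and evaluator name below are hypothetical placeholders:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32")

queries = {"q1": "What is the role of STAG1/STAG2 proteins in differentiation?"}
corpus = {
    "d1": "STAG1/STAG2 proteins are tumour suppressor proteins essential for differentiation.",
    "d2": "Histone lysine methylation patterns are altered in medulloblastoma.",
}
relevant_docs = {"q1": {"d1"}}

ir_evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="bioasq-dev",
)
results = ir_evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100 keyed by evaluator name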

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,012 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 3 tokens, mean: 63.38 tokens, max: 485 tokens
    • anchor: string; min: 5 tokens, mean: 16.13 tokens, max: 49 tokens
  • Samples:
    • positive: Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.
      anchor: What is the implication of histone lysine methylation in medulloblastoma?
    • positive: STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.
      anchor: What is the role of STAG1/STAG2 proteins in differentiation?
    • positive: The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.
      anchor: What is the association between cell phone use and glioblastoma?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
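MultipleNegativesRankingLoss treats the positives paired with the other anchors in a batch as in-batch negatives, which is why the no_duplicates batch sampler listed under the training hyperparameters is used: duplicate samples in a batch would otherwise become false negatives. A minimal construction sketch with the parameters above:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)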

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates

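A hedged sketch of a training script that reproduces the non-default hyperparameters above with the Sentence Transformers 3.x trainer. The toy anchor/positive pair and the output directory are placeholders, and the evaluation setup (eval_strategy: steps with a BIOASQ evaluator) is omitted to keep the sketch self-contained:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Placeholder pair; the real dataset has 4,012 (positive, anchor) rows
train_dataset = Dataset.from_dict({
    "positive": ["STAG1/STAG2 proteins are tumour suppressor proteins essential for differentiation."],
    "anchor": ["What is the role of STAG1/STAG2 proteins in differentiation?"],
})

loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-bioasq-1epoch-batch32",   # placeholder output directory
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
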
All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss BAAI/bge-small-en-v1.5_cosine_map@100
0.0794 10 0.5344 -
0.1587 20 0.4615 -
0.2381 30 0.301 -
0.3175 40 0.2169 -
0.3968 50 0.1053 -
0.4762 60 0.1432 -
0.5556 70 0.1589 -
0.6349 80 0.1458 -
0.7143 90 0.1692 -
0.7937 100 0.1664 -
0.8730 110 0.1252 -
0.9524 120 0.1243 -
1.0 126 - 0.8858
0.0794 10 0.1393 -
0.1587 20 0.1504 -
0.2381 30 0.1009 -
0.3175 40 0.0689 -
0.3968 50 0.0301 -
0.4762 60 0.0647 -
0.5556 70 0.0748 -
0.6349 80 0.0679 -
0.7143 90 0.1091 -
0.7937 100 0.0953 -
0.8730 110 0.089 -
0.9524 120 0.0758 -
1.0 126 - 0.8878
0.0794 10 0.092 -
0.1587 20 0.0748 -
0.2381 30 0.0392 -
0.3175 40 0.014 -
0.3968 50 0.0057 -
0.4762 60 0.0208 -
0.5556 70 0.0173 -
0.6349 80 0.0195 -
0.7143 90 0.0349 -
0.7937 100 0.0483 -
0.8730 110 0.0254 -
0.9524 120 0.0325 -
1.0 126 - 0.8883
1.0317 130 0.0582 -
1.1111 140 0.0475 -
1.1905 150 0.0325 -
1.2698 160 0.0058 -
1.3492 170 0.0054 -
1.4286 180 0.0047 -
1.5079 190 0.0076 -
1.5873 200 0.0091 -
1.6667 210 0.0232 -
1.7460 220 0.0147 -
1.8254 230 0.0194 -
1.9048 240 0.0186 -
1.9841 250 0.0141 -
2.0 252 - 0.8857
2.0635 260 0.037 -
2.1429 270 0.0401 -
2.2222 280 0.0222 -
2.3016 290 0.0134 -
2.3810 300 0.008 -
2.4603 310 0.0199 -
2.5397 320 0.017 -
2.6190 330 0.0164 -
2.6984 340 0.0344 -
2.7778 350 0.0352 -
2.8571 360 0.0346 -
2.9365 370 0.0256 -
3.0 378 - 0.8868
0.7937 100 0.0064 0.8878
0.0794 10 0.003 0.8858
0.1587 20 0.0026 0.8811
0.2381 30 0.0021 0.8817
0.3175 40 0.0017 0.8818
0.3968 50 0.0015 0.8818
0.4762 60 0.0019 0.8814
0.5556 70 0.0019 0.8798
0.6349 80 0.0024 0.8811
0.7143 90 0.0029 0.8834
0.7937 100 0.006 0.8827
0.8730 110 0.0028 0.8827
0.9524 120 0.005 0.8834

Framework Versions

  • Python: 3.11.5
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}