
SentenceTransformer based on l3cube-pune/indic-sentence-similarity-sbert

This is a sentence-transformers model finetuned from l3cube-pune/indic-sentence-similarity-sbert on the sentence-transformers/all-nli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Because it was finetuned with a Matryoshka loss, its embeddings can also be truncated to 512, 256, 128, or 64 dimensions (see Evaluation for the quality at each size).

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: l3cube-pune/indic-sentence-similarity-sbert
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: sentence-transformers/all-nli

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
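
The Pooling block above mean-pools the token embeddings produced by the underlying BertModel. For illustration, here is a minimal sketch of the equivalent computation with the plain transformers library; loading the transformer weights directly from this repository and the attention-mask weighting follow the usual recipe for Sentence Transformers checkpoints, and are an assumption rather than code from this card:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ammumadhu/indic-bert-nli-matryoshka")
bert = AutoModel.from_pretrained("ammumadhu/indic-bert-nli-matryoshka")

batch = tokenizer(
    ["Then he ran.", "He then started to run."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 768])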

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ammumadhu/indic-bert-nli-matryoshka")
# Run inference
sentences = [
    'Then he ran.',
    'He then started to run.',
    'A man plays the flute.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
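
Because the model was trained with MatryoshkaLoss at dimensions [768, 512, 256, 128, 64] (see Training Details), its embeddings can be truncated for cheaper storage and search. A minimal sketch using the truncate_dim argument, available since Sentence Transformers v2.7.0:

from sentence_transformers import SentenceTransformer

# Truncate every embedding this model produces to its first 256 dimensions.
model = SentenceTransformer("ammumadhu/indic-bert-nli-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "Then he ran.",
    "He then started to run.",
    "A man plays the flute.",
])
print(embeddings.shape)
# (3, 256)

The evaluation tables below report quality at each truncation level, from 768 down to 64 dimensions.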

Evaluation

Metrics

Each of the ten tables below reports one evaluation, matched to the evaluators in the Training Logs: the STS dev and test sets at embedding dimensions 768, 512, 256, 128, and 64.

Semantic Similarity (sts-dev-768)

Metric Value
pearson_cosine 0.8609
spearman_cosine 0.8663
pearson_manhattan 0.8587
spearman_manhattan 0.8612
pearson_euclidean 0.8585
spearman_euclidean 0.8611
pearson_dot 0.8259
spearman_dot 0.8260
pearson_max 0.8609
spearman_max 0.8663

Semantic Similarity (sts-dev-512)

Metric Value
pearson_cosine 0.8594
spearman_cosine 0.8649
pearson_manhattan 0.8574
spearman_manhattan 0.8599
pearson_euclidean 0.8575
spearman_euclidean 0.8601
pearson_dot 0.8223
spearman_dot 0.8227
pearson_max 0.8594
spearman_max 0.8649

Semantic Similarity (sts-dev-256)

Metric Value
pearson_cosine 0.8506
spearman_cosine 0.8576
pearson_manhattan 0.8528
spearman_manhattan 0.8553
pearson_euclidean 0.8527
spearman_euclidean 0.8551
pearson_dot 0.7944
spearman_dot 0.7964
pearson_max 0.8528
spearman_max 0.8576

Semantic Similarity (sts-dev-128)

Metric Value
pearson_cosine 0.8411
spearman_cosine 0.8505
pearson_manhattan 0.8462
spearman_manhattan 0.8490
pearson_euclidean 0.8458
spearman_euclidean 0.8487
pearson_dot 0.7756
spearman_dot 0.7756
pearson_max 0.8462
spearman_max 0.8505

Semantic Similarity (sts-dev-64)

Metric Value
pearson_cosine 0.8177
spearman_cosine 0.8308
pearson_manhattan 0.8292
spearman_manhattan 0.8320
pearson_euclidean 0.8311
spearman_euclidean 0.8334
pearson_dot 0.7153
spearman_dot 0.7181
pearson_max 0.8311
spearman_max 0.8334

Semantic Similarity (sts-test-768)

Metric Value
pearson_cosine 0.8492
spearman_cosine 0.8569
pearson_manhattan 0.8572
spearman_manhattan 0.8566
pearson_euclidean 0.8569
spearman_euclidean 0.8567
pearson_dot 0.7969
spearman_dot 0.7879
pearson_max 0.8572
spearman_max 0.8569

Semantic Similarity (sts-test-512)

Metric Value
pearson_cosine 0.8507
spearman_cosine 0.8575
pearson_manhattan 0.8564
spearman_manhattan 0.8560
pearson_euclidean 0.8562
spearman_euclidean 0.8561
pearson_dot 0.7973
spearman_dot 0.7873
pearson_max 0.8564
spearman_max 0.8575

Semantic Similarity (sts-test-256)

Metric Value
pearson_cosine 0.8467
spearman_cosine 0.8523
pearson_manhattan 0.8516
spearman_manhattan 0.8516
pearson_euclidean 0.8506
spearman_euclidean 0.8504
pearson_dot 0.7757
spearman_dot 0.7687
pearson_max 0.8516
spearman_max 0.8523

Semantic Similarity (sts-test-128)

Metric Value
pearson_cosine 0.8377
spearman_cosine 0.8472
pearson_manhattan 0.8466
spearman_manhattan 0.8488
pearson_euclidean 0.8456
spearman_euclidean 0.8472
pearson_dot 0.7503
spearman_dot 0.7416
pearson_max 0.8466
spearman_max 0.8488

Semantic Similarity (sts-test-64)

Metric Value
pearson_cosine 0.8174
spearman_cosine 0.8316
pearson_manhattan 0.832
spearman_manhattan 0.8347
pearson_euclidean 0.8335
spearman_euclidean 0.8351
pearson_dot 0.6935
spearman_dot 0.6844
pearson_max 0.8335
spearman_max 0.8351
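
The tables above are correlations between human similarity scores and the model's cosine, dot, Manhattan, and Euclidean similarities. A hedged sketch of how such numbers are typically produced with EmbeddingSimilarityEvaluator; the use of the sentence-transformers/stsb validation split is an assumption, since the card names the evaluators only as sts-dev and sts-test:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("ammumadhu/indic-bert-nli-matryoshka")

# Assumed evaluation data: the STS Benchmark dev split, scores normalized to [0, 1].
stsb = load_dataset("sentence-transformers/stsb", split="validation")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
    name="sts-dev",
)
print(evaluator(model))  # Pearson/Spearman scores per similarity function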

Training Details

Training Dataset

sentence-transformers/all-nli

  • Dataset: sentence-transformers/all-nli at d482672
  • Size: 10,000 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string; min 4 tokens, mean 18.8 tokens, max 89 tokens
    positive: string; min 4 tokens, mean 11.84 tokens, max 36 tokens
    negative: string; min 4 tokens, mean 12.39 tokens, max 38 tokens
  • Samples:
    anchor:   Side view of a female triathlete during the run.
    positive: A woman runs
    negative: A man sits

    anchor:   Confused person standing in the middle of the trolley tracks trying to figure out the signs.
    positive: A person is on the tracks.
    negative: A man sits in an airplane.

    anchor:   A woman in a black shirt, jean shorts and white tennis shoes is bowling.
    positive: A woman is bowling in casual clothes
    negative: A woman bowling wins an outfit of clothes
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
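
A minimal sketch of constructing this loss as parameterized above, with MatryoshkaLoss wrapping MultipleNegativesRankingLoss; the base model name is taken from this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("l3cube-pune/indic-sentence-similarity-sbert")

# Apply the ranking loss at each truncated embedding size, equally weighted.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all dimensions at every step
)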
    

Evaluation Dataset

sentence-transformers/all-nli

  • Dataset: sentence-transformers/all-nli at d482672
  • Size: 6,584 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string; min 6 tokens, mean 18.54 tokens, max 74 tokens
    positive: string; min 4 tokens, mean 9.97 tokens, max 30 tokens
    negative: string; min 5 tokens, mean 10.59 tokens, max 29 tokens
  • Samples:
    anchor:   Two women are embracing while holding to go packages.
    positive: Two woman are holding packages.
    negative: The men are fighting outside a deli.

    anchor:   Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.
    positive: Two kids in numbered jerseys wash their hands.
    negative: Two kids in jackets walk to school.

    anchor:   A man selling donuts to a customer during a world exhibition event held in the city of Angeles
    positive: A man selling donuts to a customer.
    negative: A woman drinks her coffee in a small cafe.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
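
Both splits can be loaded with the datasets library. A small sketch; the "triplet" configuration (which provides the anchor/positive/negative columns described above) and the train[:10000] slice matching the reported 10,000-sample training set are assumptions:

from datasets import load_dataset

# Assumed config and slicing, chosen to match the column names and sizes above.
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train[:10000]")
eval_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="dev")
print(train_dataset.column_names)  # ['anchor', 'positive', 'negative']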
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
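
Put together, a hedged reconstruction of the training run using the non-default hyperparameters above and the Sentence Transformers v3 trainer; the dataset slicing and output directory are assumptions:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("l3cube-pune/indic-sentence-similarity-sbert")
# Assumed slice: the card reports 10,000 training and 6,584 evaluation samples.
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train[:10000]")
eval_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="dev")

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="indic-bert-nli-matryoshka",  # assumed output directory
    num_train_epochs=1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()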

Training Logs

Epoch    Step  Train loss  Eval loss  spearman_cosine at dim 768 / 512 / 256 / 128 / 64
0.3797   30    7.9432      4.2806     sts-dev:  0.8644 / 0.8633 / 0.8570 / 0.8509 / 0.8311
0.7595   60    6.1701      3.9498     sts-dev:  0.8663 / 0.8649 / 0.8576 / 0.8505 / 0.8308
1.0      79    -           -          sts-test: 0.8569 / 0.8575 / 0.8523 / 0.8472 / 0.8316

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.0
  • Transformers: 4.41.1
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.30.1
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1
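
To reproduce this environment, the versions above can be pinned directly; a minimal sketch:

pip install "sentence-transformers==3.0.0" "transformers==4.41.1" "torch==2.3.0" "accelerate==0.30.1" "datasets==2.19.2" "tokenizers==0.19.1"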

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}