---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:100K<n<1M
- loss:CoSENTLoss
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
base_model: distilbert/distilbert-base-uncased
widget:
- source_sentence: T L 2 DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S
sentences:
- T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020.5 U625 G-S
- T L F DUMMY HEAD CG LAT WIDEBAND Static Airbag OOP Test 2025 CX430 G-S
- T R F DUMMY PELVIS LAT WIDEBAND 90 Deg Frontal Impact Simulation 2026 P800 G-S
- source_sentence: T L F DUMMY CHEST LONG WIDEBAND 90 Deg Front 2022 U553 G-S
sentences:
- T R F TORSO BELT AT D RING LOAD WIDEBAND 90 Deg Front 2022 U553 LBF
- T L F DUMMY L UP TIBIA MY LOAD WIDEBAND 90 Deg Front 2015 P552 IN-LBS
- T R F DUMMY R UP TIBIA FX LOAD WIDEBAND 30 Deg Front Angular Left 2022 U554 LBF
- source_sentence: T R F DUMMY PELVIS LAT WIDEBAND 90 Deg Front 2019 D544 G-S
sentences:
- T L F DUMMY PELVIS LAT WIDEBAND 90 Deg Front 2015 P552 G-S
- T L LOWER CONTROL ARM VERT WIDEBAND Left Side Drop Test 2024.5 P702 G-S
- F BARRIER PLATE 11030 SZ D FX LOAD WIDEBAND 90 Deg Front 2015 P552 LBF
- source_sentence: T ENGINE ENGINE TOP LAT WIDEBAND 90 Deg Front 2015 P552 G-S
sentences:
- T R ENGINE TRANS BOTTOM LAT WIDEBAND 90 Deg Front 2015 P552 G-S
- F BARRIER PLATE 09030 SZ D FX LOAD WIDEBAND 90 Deg Front 2015 P552 LBF
- T R F DUMMY NECK UPPER MX LOAD WIDEBAND 90 Deg Front 2022 U554 IN-LBS
- source_sentence: T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S
sentences:
- T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S
- T R F DUMMY HEAD CG VERT WIDEBAND VIA Linear Impact Test 2021 C727 G-S
- T L F DUMMY T1 VERT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2026 P800 G-S
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on distilbert/distilbert-base-uncased
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.27051173706186693
name: Pearson Cosine
- type: spearman_cosine
value: 0.2798593637893599
name: Spearman Cosine
- type: pearson_manhattan
value: 0.228702027931258
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.25353345676390787
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.23018017587211453
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.2550481010151111
name: Spearman Euclidean
- type: pearson_dot
value: 0.2125353301405465
name: Pearson Dot
- type: spearman_dot
value: 0.1902748420981738
name: Spearman Dot
- type: pearson_max
value: 0.27051173706186693
name: Pearson Max
- type: spearman_max
value: 0.2798593637893599
name: Spearman Max
- type: pearson_cosine
value: 0.26319176781258086
name: Pearson Cosine
- type: spearman_cosine
value: 0.2721909587247752
name: Spearman Cosine
- type: pearson_manhattan
value: 0.21766215319708615
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.2439514548051345
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.2195389492634635
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.24629153092425862
name: Spearman Euclidean
- type: pearson_dot
value: 0.21073878591545503
name: Pearson Dot
- type: spearman_dot
value: 0.1864889259868287
name: Spearman Dot
- type: pearson_max
value: 0.26319176781258086
name: Pearson Max
- type: spearman_max
value: 0.2721909587247752
name: Spearman Max
---
# SentenceTransformer based on distilbert/distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
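For illustration, a minimal sketch of how this two-module stack could be assembled by hand with `sentence_transformers.models` (the module settings mirror the architecture printout above; this is an assumption about how the stack was constructed, not the original training script):
```python
from sentence_transformers import SentenceTransformer, models

# Transformer module wrapping DistilBertModel, truncating inputs at 512 tokens
word_embedding_model = models.Transformer(
    "distilbert/distilbert-base-uncased", max_seq_length=512
)
# Mean pooling over token embeddings (pooling_mode_mean_tokens=True above)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean"
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)  # prints a module stack equivalent to the one shown above
```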
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
# ("sentence_transformers_model_id" is a placeholder; substitute this model's repository id on the Hub)
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S',
    'T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S',
    'T R F DUMMY HEAD CG VERT WIDEBAND VIA Linear Impact Test 2021 C727 G-S',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
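The same embeddings can also drive semantic search over a small corpus of channel descriptions. A hedged sketch using `sentence_transformers.util.semantic_search` (the corpus sentences are taken from the widget examples above; the model id is again a placeholder):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder Hub id

corpus = [
    "T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S",
    "T R F DUMMY HEAD CG VERT WIDEBAND VIA Linear Impact Test 2021 C727 G-S",
    "T L F DUMMY T1 VERT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2026 P800 G-S",
]
query = "T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the most similar corpus sentence by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
best = hits[0][0]
print(corpus[best["corpus_id"]], best["score"])
```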
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.2705 |
| **spearman_cosine** | **0.2799** |
| pearson_manhattan | 0.2287 |
| spearman_manhattan | 0.2535 |
| pearson_euclidean | 0.2302 |
| spearman_euclidean | 0.255 |
| pearson_dot | 0.2125 |
| spearman_dot | 0.1903 |
| pearson_max | 0.2705 |
| spearman_max | 0.2799 |
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.2632 |
| **spearman_cosine** | **0.2722** |
| pearson_manhattan | 0.2177 |
| spearman_manhattan | 0.244 |
| pearson_euclidean | 0.2195 |
| spearman_euclidean | 0.2463 |
| pearson_dot | 0.2107 |
| spearman_dot | 0.1865 |
| pearson_max | 0.2632 |
| spearman_max | 0.2722 |
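These figures were produced by the `EmbeddingSimilarityEvaluator` linked above. A minimal sketch of how a comparable evaluation could be run on held-out pairs (the pairs and scores below are hypothetical placeholders, not items from the actual evaluation set):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder Hub id

# Hypothetical (sentence1, sentence2, score) triples in the same format as the evaluation set
dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S",
        "T R F DUMMY PELVIS LAT WIDEBAND 90 Deg Front 2019 D544 G-S",
        "T ENGINE ENGINE TOP LAT WIDEBAND 90 Deg Front 2015 P552 G-S",
    ],
    sentences2=[
        "T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S",
        "T L F DUMMY PELVIS LAT WIDEBAND 90 Deg Front 2015 P552 G-S",
        "T R ENGINE TRANS BOTTOM LAT WIDEBAND 90 Deg Front 2015 P552 G-S",
    ],
    scores=[0.8, 0.7, 0.6],  # hypothetical similarity labels in [0, 1]
    name="sts-dev",
)
print(dev_evaluator(model))  # dict of Pearson/Spearman metrics as tabulated above
```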
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 481,114 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 16 tokens</li><li>mean: 32.14 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.62 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>T L C PLR SM SCS L2 HY REF 053 LAT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2018 P558 G-S</code> | <code>T PCM PWR POWER TO PCM VOLT 2 SEC WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2020 V363N VOLTS</code> | <code>0.5198143220305642</code> |
| <code>T L F DUMMY L_FEMUR MX LOAD WIDEBAND 90 Deg Frontal Impact Simulation MY2025 U717 IN-LBS</code> | <code>B L FRAME AT No 1 X MEM LAT WIDEBAND Inline 25% Left Front Offset Vehicle to Vehicle 2021 P702 G-S</code> | <code>0.5214072221695696</code> |
| <code>T R F DOOR REAR OF SEAT H PT LAT WIDEBAND 75 Deg Oblique Right Side 10 in. Pole 2015 P552 G-S</code> | <code>T SCS R2 HY BOS A12 008 TAP RIGHT C PILLAR VOLT WIDEBAND 30 Deg Front Angular Right 2021 CX727 VOLTS</code> | <code>0.322173496575591</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
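For reference, a minimal sketch of instantiating the loss with these parameters (they match the `CoSENTLoss` defaults; the base checkpoint here simply stands in for the model being trained):
```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.util import pairwise_cos_sim

model = SentenceTransformer("distilbert/distilbert-base-uncased")
# CoSENTLoss consumes (sentence1, sentence2, score) triples, as in the samples above
loss = losses.CoSENTLoss(model=model, scale=20.0, similarity_fct=pairwise_cos_sim)
```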
### Evaluation Dataset
#### Unnamed Dataset
* Size: 103,097 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 17 tokens</li><li>mean: 31.98 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 31.96 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:----------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>T R F DUMMY NECK UPPER MZ LOAD WIDEBAND 90 Deg Frontal Impact Simulation 2026 GENERIC IN-LBS</code> | <code>T R ROCKER AT C PILLAR LAT WIDEBAND 90 Deg Front 2021 P702 G-S</code> | <code>0.5234504780172093</code> |
| <code>T L ROCKER AT B_PILLAR VERT WIDEBAND 90 Deg Front 2024.5 P702 G-S</code> | <code>T RCM BTWN SEATS LOW G Z RCM C1 LZ ALV RC7 003 VOLT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2018 P558 VOLTS</code> | <code>0.36805699821563936</code> |
| <code>T R FRAME AT C_PILLAR LONG WIDEBAND 90 Deg Left Side IIHS MDB to Vehicle 2024.5 P702 G-S</code> | <code>T L F LAP BELT AT ANCHOR LOAD WIDEBAND 90 DEG / LEFT SIDE DECEL-3G 2021 P702 LBF</code> | <code>0.5309750606095435</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 32
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 32
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 7
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: False
- `include_tokens_per_second`: False
- `neftune_noise_alpha`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
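A hedged sketch of how the non-default hyperparameters above could be wired into `SentenceTransformerTrainingArguments` and `SentenceTransformerTrainer` (the toy dataset and output directory are illustrative placeholders, not the actual training data or paths):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("distilbert/distilbert-base-uncased")
loss = losses.CoSENTLoss(model)  # defaults: scale=20.0, pairwise_cos_sim

# Toy stand-in for the (sentence1, sentence2, score) training data
train_dataset = Dataset.from_dict({
    "sentence1": ["T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S"],
    "sentence2": ["T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S"],
    "score": [0.8],
})

args = SentenceTransformerTrainingArguments(
    output_dir="crash_encoder1-sts",  # illustrative output directory
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=32,
    warmup_ratio=0.1,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```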
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine |
|:-------:|:-----:|:-------------:|:------:|:-----------------------:|
| 1.0650 | 1000 | 7.6111 | 7.5503 | 0.4087 |
| 2.1299 | 2000 | 7.5359 | 7.5420 | 0.4448 |
| 3.1949 | 3000 | 7.5232 | 7.5292 | 0.4622 |
| 4.2599 | 4000 | 7.5146 | 7.5218 | 0.4779 |
| 5.3248 | 5000 | 7.5045 | 7.5200 | 0.4880 |
| 6.3898 | 6000 | 7.4956 | 7.5191 | 0.4934 |
| 7.4547 | 7000 | 7.4873 | 7.5170 | 0.4967 |
| 8.5197 | 8000 | 7.4781 | 7.5218 | 0.4931 |
| 9.5847 | 9000 | 7.4686 | 7.5257 | 0.4961 |
| 10.6496 | 10000 | 7.4596 | 7.5327 | 0.4884 |
| 11.7146 | 11000 | 7.4498 | 7.5403 | 0.4860 |
| 12.7796 | 12000 | 7.4386 | 7.5507 | 0.4735 |
| 13.8445 | 13000 | 7.4253 | 7.5651 | 0.4660 |
| 14.9095 | 14000 | 7.4124 | 7.5927 | 0.4467 |
| 15.9744 | 15000 | 7.3989 | 7.6054 | 0.4314 |
| 17.0394 | 16000 | 7.3833 | 7.6654 | 0.4163 |
| 18.1044 | 17000 | 7.3669 | 7.7186 | 0.3967 |
| 19.1693 | 18000 | 7.3519 | 7.7653 | 0.3779 |
| 20.2343 | 19000 | 7.3349 | 7.8356 | 0.3651 |
| 21.2993 | 20000 | 7.3191 | 7.8772 | 0.3495 |
| 22.3642 | 21000 | 7.3032 | 7.9346 | 0.3412 |
| 23.4292 | 22000 | 7.2873 | 7.9624 | 0.3231 |
| 24.4941 | 23000 | 7.2718 | 8.0169 | 0.3161 |
| 25.5591 | 24000 | 7.2556 | 8.0633 | 0.3050 |
| 26.6241 | 25000 | 7.2425 | 8.1021 | 0.2958 |
| 27.6890 | 26000 | 7.2278 | 8.1563 | 0.2954 |
| 28.7540 | 27000 | 7.2124 | 8.1955 | 0.2882 |
| 29.8190 | 28000 | 7.2014 | 8.2234 | 0.2821 |
| 30.8839 | 29000 | 7.1938 | 8.2447 | 0.2792 |
| 31.9489 | 30000 | 7.1811 | 8.2609 | 0.2799 |
| 32.0 | 30048 | - | - | 0.2722 |
### Framework Versions
- Python: 3.10.6
- Sentence Transformers: 3.0.0
- Transformers: 4.35.0
- PyTorch: 2.1.0a0+4136153
- Accelerate: 0.30.1
- Datasets: 2.14.1
- Tokenizers: 0.14.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->