---
base_model: sentence-transformers/paraphrase-MiniLM-L6-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:87757
- loss:CoSENTLoss
widget:
- source_sentence: buenos aires berazategui calle 22 desde 3801 hasta 3899
  sentences:
  - buenos aires berazategui bullrich desde 3801 hasta 3899
  - capital federal general pueyrredon mar del plata juan jose castelli desde 8502 hasta 8600
  - buenos aires general pueyrredon mar del plata bravo desde 2001 hasta 2099
- source_sentence: capital federal ciudad autonoma buenos aires arenales desde 3402 hasta 3500
  sentences:
  - capital federal ciudad autonoma buenos aires arenales desde 3702 hasta 3800
  - buenos aires moreno pablo acosta desde 401 hasta 499
  - buenos aires valle hermoso mar del plata tripulantes del fournier desde 4001 hasta 4099
- source_sentence: buenos aires la matanza la tablada irigoyen desde 1001 hasta 1099
  sentences:
  - santiago del estero lomas de zamora a lugano desde 502 hasta 600
  - buenos aires lomas de zamora ingeniero budge mayor eduardo olivero 3400
  - buenos aires la matanza la tablada irigoyen 2599
- source_sentence: buenos aires avellaneda villa dominico alberto barcelo desde 302 hasta 400
  sentences:
  - buenos aires avellaneda villa dominico barcelo alberto desde 302 hasta 400
  - buenos aires hurlingham concepcion arenal desde 6902 hasta 7000
  - buenos aires la tablada pje laplace desde 301 hasta 399
- source_sentence: buenos aires general pueyrredon mar del plata av patricio peralta ramos desde 6101 hasta 6199
  sentences:
  - bahia blanca buenos aires estacion algarrobo desde 1301 hasta 1399
  - buenos aires general pueyrredon mar del plata ing c chapeaurouge desde 6101 hasta 6199
  - buenos aires general pueyrredon mar del plata pje jacaranda desde 4001 hasta 4099
---
SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-MiniLM-L6-v2
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
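These properties can also be read directly from the loaded model. A minimal sketch, assuming the checkpoint loads from the repository shown in the Usage section below:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tomasravel/modelo_finetuneadoX2")
print(model.get_sentence_embedding_dimension())  # 384-dimensional output embeddings
print(model.max_seq_length)                      # 128-token input limit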
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
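The two modules correspond to a BERT encoder followed by mean pooling over its token embeddings. The following sketch shows roughly what that pooling step computes when done by hand with the underlying transformers model (illustrative only; it assumes the checkpoint's transformer weights load with AutoModel, as is typical for sentence-transformers repositories, and in practice the SentenceTransformer pipeline handles all of this for you):
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tomasravel/modelo_finetuneadoX2")
encoder = AutoModel.from_pretrained("tomasravel/modelo_finetuneadoX2")

# Tokenize one address string, truncating to the 128-token limit
batch = tokenizer(
    ["buenos aires berazategui calle 22 desde 3801 hasta 3899"],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # [batch, seq_len, 384]

# Mean pooling: average the token vectors, ignoring padding positions
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 384])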
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomasravel/modelo_finetuneadoX2")
# Run inference
sentences = [
'buenos aires general pueyrredon mar del plata av patricio peralta ramos desde 6101 hasta 6199',
'buenos aires general pueyrredon mar del plata ing c chapeaurouge desde 6101 hasta 6199',
'buenos aires general pueyrredon mar del plata pje jacaranda desde 4001 hasta 4099',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Training Details
Training Dataset
Unnamed Dataset
- Size: 87,757 training samples
- Columns: sentence_0, sentence_1, and label
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | float |
| details | min: 13 tokens, mean: 21.0 tokens, max: 29 tokens | min: 8 tokens, mean: 19.59 tokens, max: 30 tokens | min: 0.5, mean: 0.77, max: 1.0 |
- Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| buenos aires general pueyrredon mar del plata p albarracin desde 1902 hasta 2000 | buenos aires general pueyrredon mar del plata albarracin paula desde 1902 hasta 2000 | 1.0 |
| buenos aires berazategui calle 11 desde 2001 hasta 2099 | capital federal berazategui calle 11 desde 2001 hasta 2099 | 0.72 |
| buenos aires bahia blanca gral alvear desde 1901 hasta 1999 | buenos aires bahia blanca gral alvear 1974 | 1.0 |
- Loss: CoSENTLoss with these parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
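For reference, this corresponds to constructing the loss roughly as follows. CoSENTLoss expects sentence pairs with a float similarity label, which matches the sentence_0, sentence_1, and label columns above (a minimal sketch):
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.util import pairwise_cos_sim

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")
# scale=20.0 and pairwise cosine similarity match the parameters listed above
loss = losses.CoSENTLoss(model, scale=20.0, similarity_fct=pairwise_cos_sim)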
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
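Putting the dataset, loss, and non-default hyperparameters together, the fine-tuning run looks roughly like the sketch below. This is a hedged reconstruction: the CSV file name is a placeholder, and only the batch size, number of epochs, and loss settings are taken from this card; everything else keeps the default values listed above.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")

# Hypothetical file holding the 87,757 rows with columns sentence_0, sentence_1, label
train_dataset = Dataset.from_csv("address_pairs.csv")

loss = losses.CoSENTLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="modelo_finetuneadoX2",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()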
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.0912 | 500 | 4.2287 |
0.1823 | 1000 | 3.6868 |
0.2735 | 1500 | 3.4965 |
0.3646 | 2000 | 3.3966 |
0.4558 | 2500 | 3.3262 |
0.5469 | 3000 | 3.2206 |
0.6381 | 3500 | 3.1346 |
0.7293 | 4000 | 3.0975 |
0.8204 | 4500 | 2.988 |
0.9116 | 5000 | 3.0538 |
1.0027 | 5500 | 2.9717 |
1.0939 | 6000 | 2.9248 |
1.1851 | 6500 | 2.8625 |
1.2762 | 7000 | 2.8606 |
1.3674 | 7500 | 2.762 |
1.4585 | 8000 | 2.8183 |
1.5497 | 8500 | 2.705 |
1.6408 | 9000 | 2.7019 |
1.7320 | 9500 | 2.623 |
1.8232 | 10000 | 2.6409 |
1.9143 | 10500 | 2.709 |
2.0055 | 11000 | 2.6223 |
2.0966 | 11500 | 2.6085 |
2.1878 | 12000 | 2.6152 |
2.2789 | 12500 | 2.5679 |
2.3701 | 13000 | 2.533 |
2.4613 | 13500 | 2.5537 |
2.5524 | 14000 | 2.5063 |
2.6436 | 14500 | 2.4698 |
2.7347 | 15000 | 2.4349 |
2.8259 | 15500 | 2.4058 |
2.9170 | 16000 | 2.5143 |
Framework Versions
- Python: 3.9.12
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.2.2
- Accelerate: 0.34.2
- Datasets: 2.21.0
- Tokenizers: 0.19.1
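To reproduce this environment, the versions above can be pinned at install time (optional; the unpinned install in the Usage section also works):
pip install "sentence-transformers==3.0.1" "transformers==4.44.2" "torch==2.2.2" "accelerate==0.34.2" "datasets==2.21.0" "tokenizers==0.19.1"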
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CoSENTLoss
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}