Tags: Text Classification · Transformers · PyTorch · English · roberta · fill-mask · finance · Inference Endpoints

We collect financial domain terms from Investopedia's Financial Terms Dictionary, the NYSSCPA's accounting terminology guide, and Harvey's Hypertextual Finance Glossary to expand RoBERTa's vocabulary.
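
A minimal sketch of this vocabulary expansion with the Hugging Face transformers API; the term list below is a small illustrative stand-in for the terms collected from the three glossaries:

```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Illustrative stand-in for the terms collected from Investopedia,
# the NYSSCPA guide, and Harvey's glossary; the real list is far larger.
financial_terms = ["EBITDA", "amortization", "collateralized debt obligation"]
num_added = tokenizer.add_tokens(financial_terms)

# Grow the embedding matrix so the new token ids get embedding rows.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} financial terms to the vocabulary.")
```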

Building on this RoBERTa with the added financial terms, we pretrained our model on multiple financial corpora (see the datasets section below).

In the continual pretraining step, we apply the following experimental settings to achieve better fine-tuned results on four financial datasets (see the sketch after this list):

  1. Masking Probability: 0.4 (instead of the default 0.15)
  2. Warmup Steps: 0 (this yielded better results than runs with warmup steps)
  3. Epochs: 1 (a single epoch is enough and limits overfitting)
  4. Weight Decay: 0.01
  5. Train Batch Size: 64
  6. FP16 mixed-precision training
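
As a sketch, these settings map onto the transformers Trainer roughly as follows; the corpus here is a tiny placeholder, not the actual financial pretraining data:

```python
from datasets import Dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Placeholder corpus; the real run uses the financial corpora listed below.
texts = ["The firm reported strong EBITDA growth this quarter."]
train_dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Setting 1: mask 40% of tokens instead of the default 15%.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.4
)

args = TrainingArguments(
    output_dir="fin-roberta-pretrain",
    num_train_epochs=1,              # setting 3: a single epoch
    warmup_steps=0,                  # setting 2: no warmup
    weight_decay=0.01,               # setting 4
    per_device_train_batch_size=64,  # setting 5
    fp16=True,                       # setting 6: mixed precision (needs a GPU)
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=train_dataset,
).train()
```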
Inference Examples
This model does not yet have enough activity to be served through the serverless Inference API; deploy it to dedicated Inference Endpoints or run it locally instead.
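
For local use, a standard fill-mask pipeline works; the example sentence is illustrative:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SUFEHeisenberg/Fin-RoBERTa")

# RoBERTa uses <mask> as its mask token.
for pred in fill_mask("The central bank raised interest <mask> to curb inflation."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```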

Datasets used to train SUFEHeisenberg/Fin-RoBERTa