samchain committed
Commit 0f4159f
1 parent: d749c58

Upload TFBertForPreTraining

Files changed (3)
  1. README.md +49 -0
  2. config.json +1 -2
  3. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,49 @@
+ ---
+ license: apache-2.0
+ base_model: bert-base-uncased
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: EconoBert
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # EconoBert
+
+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.31.0
+ - TensorFlow 2.12.0
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3
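The optimizer entry above is the Adam configuration that Keras serialized from the training run. As an illustrative sketch (not the actual training code), a single scalar Adam update using the listed hyperparameters (learning_rate=1e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07) shows what each field controls:

```python
def adam_step(param, grad, m, v, t,
              lr=1e-5, beta_1=0.9, beta_2=0.999, eps=1e-7):
    """One scalar Adam update; m/v are the running moment estimates, t >= 1."""
    m = beta_1 * m + (1 - beta_1) * grad        # first-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta_1 ** t)               # bias correction for warm-up steps
    v_hat = v / (1 - beta_2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# On the first step the bias-corrected update is roughly lr * sign(grad),
# so a parameter moves by about 1e-5 regardless of the gradient's magnitude.
p, m, v = adam_step(1.0, 1.0, 0.0, 0.0, t=1)
```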
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "/EconoBert_tf",
+ "_name_or_path": "bert-base-uncased",
  "architectures": [
  "BertForPreTraining"
  ],
@@ -18,7 +18,6 @@
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
- "torch_dtype": "float32",
  "transformers_version": "4.31.0",
  "type_vocab_size": 2,
  "use_cache": true,
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c09ff384b7384c160c845c913103f5e5a3d9a2296a24042499c18c411fb1bb3d
+ size 536063432
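The 536 MB `tf_model.h5` weights file is stored via Git LFS, so the repository itself holds only the small pointer file shown above. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is hypothetical, not part of any Git LFS library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (spec v1): one 'key value' pair per line."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The exact pointer content committed for tf_model.h5:
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:c09ff384b7384c160c845c913103f5e5a3d9a2296a24042499c18c411fb1bb3d\n"
    "size 536063432\n"
)

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
print(algo, int(info["size"]))  # → sha256 536063432
```

The `oid` field is the SHA-256 digest of the real file, which Git LFS uses to fetch the blob from the LFS store on checkout.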