Husain committed on
Commit
9043d5b
1 Parent(s): d2892f4

Upload 8 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,148 @@
+ ---
+ language:
+ - en
+ - de
+ - fr
+ - it
+ - nl
+ - multilingual
+ tags:
+ - punctuation prediction
+ - punctuation
+ datasets:
+ - wmt/europarl
+ - SoNaR
+ license: mit
+ widget:
+ - text: "Ho sentito che ti sei laureata il che mi fa molto piacere"
+   example_title: "Italian"
+ - text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre"
+   example_title: "French"
+ - text: "Ist das eine Frage Frau Müller"
+   example_title: "German"
+ - text: "My name is Clara and I live in Berkeley California"
+   example_title: "English"
+ - text: "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
+   example_title: "Dutch"
+ metrics:
+ - f1
+ ---
+
+ This model predicts the punctuation of English, Italian, French, German and Dutch texts. We developed it to restore the punctuation of transcribed spoken language.
+
+ This multilingual model was trained on the [Europarl Dataset](https://huggingface.co/datasets/wmt/europarl) provided by the [SEPP-NLG Shared Task](https://sites.google.com/view/sentence-segmentation); for Dutch we additionally included the [SoNaR Dataset](http://hdl.handle.net/10032/tm-a2-h5). *Please note that the Europarl data consists of political speeches, so the model might perform differently on texts from other domains.*
+
+ The model restores the following punctuation markers: **"." "," "?" "-" ":"**
+ ## Sample Code
+ We provide a simple Python package that allows you to process text of any length.
+
+ ## Install
+
+ To get started, install the package from [PyPI](https://pypi.org/project/deepmultilingualpunctuation/):
+
+ ```bash
+ pip install deepmultilingualpunctuation
+ ```
+ ### Restore Punctuation
+ ```python
+ from deepmultilingualpunctuation import PunctuationModel
+
+ model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")
+ text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
+ result = model.restore_punctuation(text)
+ print(result)
+ ```
+
+ **output**
+ > My name is Clara and I live in Berkeley, California. Ist das eine Frage, Frau Müller?
+
+
+ ### Predict Labels
+ ```python
+ from deepmultilingualpunctuation import PunctuationModel
+
+ model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")
+ text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
+ clean_text = model.preprocess(text)
+ labeled_words = model.predict(clean_text)
+ print(labeled_words)
+ ```
+
+ **output**
+
+ > [['My', '0', 0.99998856], ['name', '0', 0.9999708], ['is', '0', 0.99975926], ['Clara', '0', 0.6117834], ['and', '0', 0.9999014], ['I', '0', 0.9999808], ['live', '0', 0.9999666], ['in', '0', 0.99990165], ['Berkeley', ',', 0.9941764], ['California', '.', 0.9952892], ['Ist', '0', 0.9999577], ['das', '0', 0.9999678], ['eine', '0', 0.99998224], ['Frage', ',', 0.9952265], ['Frau', '0', 0.99995995], ['Müller', '?', 0.972517]]
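The label triples above can be stitched back into punctuated text by hand. A minimal sketch, assuming the `[word, label, score]` format shown above, where the label `'0'` means "no punctuation after this word" (the `attach_punctuation` helper is illustrative, not part of the package):

```python
# Reconstruct punctuated text from [word, label, score] triples,
# where the label '0' means "no punctuation after this word".
labeled_words = [
    ['My', '0', 0.99998856], ['name', '0', 0.9999708], ['is', '0', 0.99975926],
    ['Clara', '0', 0.6117834], ['and', '0', 0.9999014], ['I', '0', 0.9999808],
    ['live', '0', 0.9999666], ['in', '0', 0.99990165],
    ['Berkeley', ',', 0.9941764], ['California', '.', 0.9952892],
]

def attach_punctuation(triples):
    # Append the predicted marker to each word unless the label is '0'.
    tokens = [word if label == "0" else word + label for word, label, _score in triples]
    return " ".join(tokens)

print(attach_punctuation(labeled_words))
# → My name is Clara and I live in Berkeley, California.
```

The confidence score in each triple can also be used to threshold low-certainty predictions before attaching markers.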
+
+ ## Results
+
+ The performance differs between the punctuation markers: hyphens and colons are, in many cases, optional and can be substituted by either a comma or a full stop. The model achieves the following F1 scores for the different languages:
+
+ | Label         | English | German | French | Italian | Dutch |
+ | ------------- | ------- | ------ | ------ | ------- | ----- |
+ | 0             | 0.990   | 0.996  | 0.991  | 0.988   | 0.994 |
+ | .             | 0.924   | 0.951  | 0.921  | 0.917   | 0.959 |
+ | ?             | 0.825   | 0.829  | 0.800  | 0.736   | 0.817 |
+ | ,             | 0.798   | 0.937  | 0.811  | 0.778   | 0.813 |
+ | :             | 0.535   | 0.608  | 0.578  | 0.544   | 0.657 |
+ | -             | 0.345   | 0.384  | 0.353  | 0.344   | 0.464 |
+ | macro average | 0.736   | 0.784  | 0.742  | 0.718   | 0.784 |
+ | micro average | 0.975   | 0.987  | 0.977  | 0.972   | 0.983 |
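As a quick sanity check, the macro average row is simply the unweighted mean of the six per-label F1 scores, while the micro average weights labels by frequency (which is why the dominant `0` label pulls it up). For the English column:

```python
# Macro F1 = unweighted mean of the per-label F1 scores.
# Values taken from the English column of the table above.
english_f1 = {"0": 0.990, ".": 0.924, "?": 0.825, ",": 0.798, ":": 0.535, "-": 0.345}
macro_f1 = sum(english_f1.values()) / len(english_f1)
print(round(macro_f1, 3))  # → 0.736, matching the table
```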
+
+ ## Languages
+
+ ### Models
+
+ | Languages | Model |
+ | ------------------------------------------ | ------------------------------------------------------------ |
+ | English, Italian, French and German | [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large) |
+ | English, Italian, French, German and Dutch | [oliverguhr/fullstop-punctuation-multilingual-sonar-base](https://huggingface.co/oliverguhr/fullstop-punctuation-multilingual-sonar-base) |
+ | Dutch | [oliverguhr/fullstop-dutch-sonar-punctuation-prediction](https://huggingface.co/oliverguhr/fullstop-dutch-sonar-punctuation-prediction) |
+
+ ### Community Models
+
+ | Languages | Model |
+ | ------------------------------------------ | ------------------------------------------------------------ |
+ | English, German, French, Spanish, Bulgarian, Italian, Polish, Dutch, Czech, Portuguese, Slovak, Slovenian | [kredor/punctuate-all](https://huggingface.co/kredor/punctuate-all) |
+ | Catalan | [softcatala/fullstop-catalan-punctuation-prediction](https://huggingface.co/softcatala/fullstop-catalan-punctuation-prediction) |
+
+ You can use a different model by setting the model parameter:
+
+ ```python
+ model = PunctuationModel(model="oliverguhr/fullstop-dutch-sonar-punctuation-prediction")
+ ```
+
+ ## How to cite us
+
+ ```bibtex
+ @inproceedings{guhr-EtAl:2021:fullstop,
+   title = {FullStop: Multilingual Deep Models for Punctuation Prediction},
+   author = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim},
+   booktitle = {Proceedings of the Swiss Text Analytics Conference 2021},
+   month = {June},
+   year = {2021},
+   address = {Winterthur, Switzerland},
+   publisher = {CEUR Workshop Proceedings},
+   url = {http://ceur-ws.org/Vol-2957/sepp_paper4.pdf}
+ }
+ ```
+
+ ```bibtex
+ @misc{https://doi.org/10.48550/arxiv.2301.03319,
+   doi = {10.48550/ARXIV.2301.03319},
+   url = {https://arxiv.org/abs/2301.03319},
+   author = {Vandeghinste, Vincent and Guhr, Oliver},
+   keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, I.2.7},
+   title = {FullStop: Punctuation and Segmentation Prediction for Dutch with Transformers},
+   publisher = {arXiv},
+   year = {2023},
+   copyright = {Creative Commons Attribution Share Alike 4.0 International}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "_name_or_path": "xlm-roberta-base",
+   "architectures": [
+     "XLMRobertaForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "0",
+     "1": ".",
+     "2": ",",
+     "3": "?",
+     "4": "-",
+     "5": ":"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "0": 0,
+     ".": 1,
+     ",": 2,
+     "?": 3,
+     "-": 4,
+     ":": 5
+   },
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.18.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
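The `id2label` block in this config is what maps the token-classification head's six output indices back to punctuation marks ('0' meaning none). A minimal sketch of that decoding step; the `predicted_ids` values here are made up for illustration, not real model output:

```python
# id2label from config.json: class index -> punctuation mark ('0' = none).
id2label = {0: "0", 1: ".", 2: ",", 3: "?", 4: "-", 5: ":"}

# Hypothetical per-token argmax over the model's 6-way logits.
predicted_ids = [0, 0, 2, 1]
print([id2label[i] for i in predicted_ids])  # → ['0', '0', ',', '.']
```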
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f507df03a7511f5c681809fac1316fc5af93e6ee553a4647e8ed16c68a0ffe8
+ size 1109858936
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a21e4f48b6e132aba56ccd0e156ea5963867e55e49c73a12d3049adc7fa162e9
+ size 1109901745
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2c509a525eb51aebb33fb59c24ee923c1d4c1db23c3ae81fe05ccf354084f7b
+ size 17082758
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "strip_accent": false, "add_prefix_space": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "xlm-roberta-base", "tokenizer_class": "XLMRobertaTokenizer"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39e2136cf1f522fec134a37b014f35805bc522189fc90d09abac212a2aebd150
+ size 3119