Commit 8c8a754 by glecorve (parent 51b342e): Enriched README. Files changed: README.md (+254 -0).
  path: data/test-*
- split: validation
  path: data/validation-*
task_categories:
- conversational
- question-answering
tags:
- qa
- knowledge-graph
- sparql
- multi-hop
language:
- en
---

# Dataset Card for CSQA-SPARQLtoText

## Table of Contents
- [Dataset Card for CSQA-SPARQLtoText](#dataset-card-for-csqa-sparqltotext)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported tasks](#supported-tasks)
- [Knowledge-based question-answering](#knowledge-based-question-answering)
- [SPARQL queries and natural language questions](#sparql-queries-and-natural-language-questions)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Types of questions](#types-of-questions)
- [Data splits](#data-splits)
- [JSON fields](#json-fields)
- [Original fields](#original-fields)
- [New fields](#new-fields)
- [Verbalized fields](#verbalized-fields)
- [Format of the SPARQL queries](#format-of-the-sparql-queries)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [This version of the corpus (with SPARQL queries)](#this-version-of-the-corpus-with-sparql-queries)
- [Original corpus (CSQA)](#original-corpus-csqa)
- [CARTON](#carton)

## Dataset Description

- **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/)
- **Point of Contact:** GwΓ©nolΓ© LecorvΓ©

### Dataset Summary

The CSQA corpus (Complex Sequential Question-Answering, see https://amritasaha1812.github.io/CSQA/) is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields that make it easier to run specific tasks, especially SPARQL-to-text conversion.

The original data was post-processed as follows:

1. Verbalization templates were applied to the answers, and their entities were verbalized (replaced by their labels in Wikidata)

2. Questions were parsed with the CARTON algorithm to produce a sequence of actions in a specific grammar

3. Sequences of actions were mapped to SPARQL queries, and entities were verbalized (replaced by their labels in Wikidata)

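Steps 1 and 3 above replace Wikidata IDs with their labels. A toy sketch of that verbalization step, using a hand-written two-entry label table (the actual pipeline resolves labels against Wikidata itself):

```python
# Illustrative only: the real post-processing looks labels up in Wikidata.
# Q82955 is the Wikidata ID of "politician", P106 of "occupation".
labels = {"Q82955": "politician", "P106": "occupation"}

def verbalize(text: str) -> str:
    """Replace every known Wikidata ID in `text` by its label."""
    for wikidata_id, label in labels.items():
        text = text.replace(wikidata_id, label)
    return text

print(verbalize("Q82955"))  # politician
```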
### Supported tasks

- Knowledge-based question-answering
- Text-to-SPARQL conversion

#### Knowledge-based question-answering

Below is an example of a dialogue:
- Q1: Which occupation is the profession of Edmond Yernaux ?
- A1: politician
- Q2: Which collectable has that occupation as its principal topic ?
- A2: Notitia Parliamentaria, An History of the Counties, etc.

#### SPARQL queries and natural language questions

```sparql
SELECT DISTINCT ?x WHERE
{ ?x rdf:type ontology:occupation . resource:Edmond_Yernaux property:occupation ?x }
```

is equivalent to:

```txt
Which occupation is the profession of Edmond Yernaux ?
```

### Languages

- English

## Dataset Structure

The corpus follows the global architecture of the original version of CSQA (https://amritasaha1812.github.io/CSQA/).

There is one directory for each of the train, dev, and test sets.

Dialogues are stored in separate directories, 100 dialogues per directory.

Finally, each dialogue is stored in a JSON file as a list of turns.

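The layout described above can be mimicked and read back with a few lines of Python (the directory and file names below are illustrative, not the corpus's actual naming scheme):

```python
import json
import tempfile
from pathlib import Path

# Build a miniature mock of the layout: <split>/<dialogue directory>/<dialogue>.json
# Directory names are hypothetical; in the real corpus each directory holds
# up to 100 dialogues.
root = Path(tempfile.mkdtemp())
part = root / "train" / "QA_0"
part.mkdir(parents=True)

# One dialogue = one JSON file containing a list of turns.
dialogue = [
    {"speaker": "USER", "utterance": "Which occupation is the profession of Edmond Yernaux ?"},
    {"speaker": "SYSTEM", "utterance": "politician"},
]
(part / "QA_0.json").write_text(json.dumps(dialogue))

# Reading a dialogue back: the JSON file deserializes to the list of turns.
turns = json.loads((part / "QA_0.json").read_text())
print(len(turns), turns[0]["speaker"])  # 2 USER
```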
### Types of questions

Comparison of question types across related datasets:

| | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLG-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) |
|--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:|
| **Number of triplets in query** | 1 | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | 2 | | βœ“ | βœ“ | βœ“ | βœ“ |
| | More | | | βœ“ | βœ“ | βœ“ |
| **Logical connector between triplets** | Conjunction | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Disjunction | | | | βœ“ | βœ“ |
| | Exclusion | | | | βœ“ | βœ“ |
| **Topology of the query graph** | Direct | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Sibling | | βœ“ | βœ“ | βœ“ | βœ“ |
| | Chain | | βœ“ | βœ“ | βœ“ | βœ“ |
| | Mixed | | | βœ“ | | βœ“ |
| | Other | | βœ“ | βœ“ | βœ“ | βœ“ |
| **Variable typing in the query** | None | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Target variable | | βœ“ | βœ“ | βœ“ | βœ“ |
| | Internal variable | | βœ“ | βœ“ | βœ“ | βœ“ |
| **Comparison clauses** | None | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | String | | | βœ“ | | βœ“ |
| | Number | | | βœ“ | βœ“ | βœ“ |
| | Date | | | βœ“ | | βœ“ |
| **Superlative clauses** | No | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Yes | | | | βœ“ | |
| **Answer type** | Entity (open) | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Entity (closed) | | | | βœ“ | βœ“ |
| | Number | | | βœ“ | βœ“ | βœ“ |
| | Boolean | | βœ“ | βœ“ | βœ“ | βœ“ |
| **Answer cardinality** | 0 (unanswerable) | | | βœ“ | | βœ“ |
| | 1 | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | More | | βœ“ | βœ“ | βœ“ | βœ“ |
| **Number of target variables** | 0 (β‡’ ASK verb) | | βœ“ | βœ“ | βœ“ | βœ“ |
| | 1 | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | 2 | | | βœ“ | | βœ“ |
| **Dialogue context** | Self-sufficient | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Coreference | | | | βœ“ | βœ“ |
| | Ellipsis | | | | βœ“ | βœ“ |
| **Meaning** | Meaningful | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ |
| | Non-sense | | | | | βœ“ |

### Data splits

Text verbalization is only available for a subset of the test set, referred to as the *challenge set*. Other samples only contain dialogues in the form of follow-up SPARQL queries.

| | Train | Validation | Test |
| --------------------- | ---------- | ---------- | ---------- |
| Questions | 1.5M | 167K | 260K |
| Dialogues | 152K | 17K | 28K |

Corpus-wide statistics:

* NL questions per query: 1
* Characters per query: 163 (Β± 100)
* Tokens per question: 10 (Β± 4)

### JSON fields

Each turn of a dialogue contains the following fields:

#### Original fields

* `ques_type_id`: ID corresponding to the question utterance

* `description`: Description of the type of question

* `relations`: IDs of the predicates used in the utterance

* `entities_in_utterance`: IDs of the entities used in the question

* `speaker`: The nature of the speaker: `SYSTEM` or `USER`

* `utterance`: The utterance itself: either the question, a clarification, or a response

* `active_set`: A regular expression which identifies the entity set of the answer list

* `all_entities`: List of ALL entities which constitute the answer to the question

* `question-type`: Type of question (broad types used for evaluation, as given in the original authors' paper)

* `type_list`: List containing the entity IDs of all entity parents used in the question

#### New fields

* `is_spurious`: introduced by CARTON

* `is_incomplete`: whether the question is self-sufficient (complete) or relies on information given in the previous turns (incomplete)

* `parsed_active_set`:

* `gold_actions`: sequence of ACTIONs as returned by CARTON

* `sparql_query`: SPARQL query

#### Verbalized fields

Fields with `verbalized` in their name are verbalized versions of other fields, i.e. IDs were replaced by the actual words/labels.

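As a quick illustration of the `speaker` field, alternating `USER`/`SYSTEM` turns can be paired into question/answer tuples. The turn dicts below use only a subset of the fields, with values taken from the dialogue example earlier in this card:

```python
# Each dialogue is a list of turns; USER and SYSTEM turns alternate.
turns = [
    {"speaker": "USER", "utterance": "Which occupation is the profession of Edmond Yernaux ?"},
    {"speaker": "SYSTEM", "utterance": "politician"},
    {"speaker": "USER", "utterance": "Which collectable has that occupation as its principal topic ?"},
    {"speaker": "SYSTEM", "utterance": "Notitia Parliamentaria, An History of the Counties, etc."},
]

# Pair each USER question with the SYSTEM answer that follows it.
qa_pairs = [
    (q["utterance"], a["utterance"])
    for q, a in zip(turns[::2], turns[1::2])
    if q["speaker"] == "USER" and a["speaker"] == "SYSTEM"
]
print(len(qa_pairs))  # 2
```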
### Format of the SPARQL queries

* Clauses appear in random order

* Variable names are random letters, and the letters change from one turn to another

* Delimiters are surrounded by spaces

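Because variable letters are re-drawn at every turn, comparing queries across turns usually requires canonicalizing the variable names first. A minimal sketch of such a normalization (not part of the released corpus tooling):

```python
import re

def canonicalize_vars(query: str) -> str:
    """Rename SPARQL variables to ?v0, ?v1, ... in order of first appearance,
    so that two queries differing only in their (random) variable letters
    compare equal."""
    mapping = {}
    def rename(match):
        name = match.group(0)
        if name not in mapping:
            mapping[name] = f"?v{len(mapping)}"
        return mapping[name]
    return re.sub(r"\?\w+", rename, query)

q1 = "SELECT DISTINCT ?x WHERE { ?x rdf:type ontology:occupation . resource:Edmond_Yernaux property:occupation ?x }"
q2 = "SELECT DISTINCT ?b WHERE { ?b rdf:type ontology:occupation . resource:Edmond_Yernaux property:occupation ?b }"
print(canonicalize_vars(q1) == canonicalize_vars(q2))  # True
```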
## Additional Information

### Licensing Information

* Content from the original dataset: CC BY-SA 4.0

* New content: CC BY-SA 4.0

### Citation Information

#### This version of the corpus (with SPARQL queries)

```bibtex
@inproceedings{lecorve2022sparql2text,
  title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications},
  author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.},
  booktitle={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)},
  year={2022}
}
```

#### Original corpus (CSQA)

```bibtex
@InProceedings{saha2018complex,
  title = {Complex {Sequential} {Question} {Answering}: {Towards} {Learning} to {Converse} {Over} {Linked} {Question} {Answer} {Pairs} with a {Knowledge} {Graph}},
  volume = {32},
  issn = {2374-3468},
  url = {https://ojs.aaai.org/index.php/AAAI/article/view/11332},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  author = {Saha, Amrita and Pahuja, Vardaan and Khapra, Mitesh and Sankaranarayanan, Karthik and Chandar, Sarath},
  month = apr,
  year = {2018}
}
```

#### CARTON

```bibtex
@InProceedings{plepi2021context,
  author="Plepi, Joan and Kacupaj, Endri and Singh, Kuldeep and Thakkar, Harsh and Lehmann, Jens",
  editor="Verborgh, Ruben and Hose, Katja and Paulheim, Heiko and Champin, Pierre-Antoine and Maleshkova, Maria and Corcho, Oscar and Ristoski, Petar and Alam, Mehwish",
  title="Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs",
  booktitle="Proceedings of The Semantic Web",
  year="2021",
  publisher="Springer International Publishing",
  pages="356--371",
  isbn="978-3-030-77385-4"
}
```