260489da0fb3f7a201a6a1cce8f03b_4
<cite>Bahdanau et al. [2015]</cite> define the conditional probability in Eq. (1) as where g is a non-linear function and s_n is the hidden state corresponding to the n-th target word computed by
background
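The two equations elided from the snippet above by text extraction are, as given in Bahdanau et al. (2015) (with s_n the decoder hidden state and c_n the attention context vector; indices adapted to the snippet's notation):

```latex
% Eq. (1): conditional probability of the n-th target word
p(y_n \mid y_1, \ldots, y_{n-1}, \mathbf{x}) = g(y_{n-1}, s_n, c_n)
% with the decoder hidden state updated as
s_n = f(s_{n-1}, y_{n-1}, c_n)
```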
260489da0fb3f7a201a6a1cce8f03b_5
We follow <cite>Bahdanau et al. [2015]</cite> to restrict that sentences are no longer than 50 words.
uses
260489da0fb3f7a201a6a1cce8f03b_6
GroundHog is an attention-based neural machine translation system<cite> [Bahdanau et al., 2015]</cite> .
background
260489da0fb3f7a201a6a1cce8f03b_7
We compared our approach with two state-of-the-art SMT and NMT systems: [Koehn and Hoang, 2007] . GroundHog is an attention-based neural machine translation system<cite> [Bahdanau et al., 2015]</cite> . 1. Moses [Koehn and Hoang, 2007] : a phrase-based SMT system; 2. GroundHog<cite> [Bahdanau et al., 2015]</cite> : an attention-based NMT system.
uses
260489da0fb3f7a201a6a1cce8f03b_8
GroundHog is an attention-based neural machine translation system<cite> [Bahdanau et al., 2015]</cite> . Our approach simply extends GroundHog by replacing independent training with agreement-based joint training.
extends background
260489da0fb3f7a201a6a1cce8f03b_9
<cite>Bahdanau et al. [2015]</cite> first introduce the attentional mechanism into neural machine translation to enable the decoder to focus on relevant parts of the source sentence during decoding.
background
260489da0fb3f7a201a6a1cce8f03b_10
After analyzing the alignment matrices generated by GroundHog<cite> [Bahdanau et al., 2015]</cite> , we find that modeling the structural divergence of natural languages is so challenging that unidirectional models can only capture part of alignment regularities. This finding inspires us to improve attention-based NMT by combining two unidirectional models.
motivation
260489da0fb3f7a201a6a1cce8f03b_11
After analyzing the alignment matrices generated by GroundHog<cite> [Bahdanau et al., 2015]</cite> , we find that modeling the structural divergence of natural languages is so challenging that unidirectional models can only capture part of alignment regularities.
motivation
2606ecb66287c0199f3aa6d95f6774_0
The progress in both fields has inspired researchers to build holistic architectures for challenging grounding [14, 15] , natural language generation from image/video [16, 17, 18] , image-to-sentence alignment [19, 20, 21, 22] , and recently presented question-answering problems [23, 24, 25, 26, <cite>27]</cite> .
background
2606ecb66287c0199f3aa6d95f6774_1
Third, if our aim is to mimic human response, we have to deal with inherent ambiguities due to human judgement that stem from issues like binding, reference frames, social conventions. For instance <cite>[27]</cite> reports that for a question answering task on real-world images even human answers are inconsistent.
motivation
2606ecb66287c0199f3aa6d95f6774_2
We exemplify some of our findings on the <cite>DAQUAR dataset</cite> <cite>[27]</cite> with the aim of demonstrating different challenges that are present in <cite>the dataset</cite>.
uses
2606ecb66287c0199f3aa6d95f6774_3
The quality of an answer depends on how ambiguous and latent notions of reference frames and intentions are understood <cite>[27</cite>, 44] .
background
2606ecb66287c0199f3aa6d95f6774_5
<cite>DAQUAR</cite> <cite>[27]</cite> is a challenging, large dataset for a question answering task based on real-world images.
background
2606ecb66287c0199f3aa6d95f6774_6
<cite>DAQUAR's</cite> language scope is beyond the nouns or tuples that are typical to recognition datasets [51, 52, 53] .
background
2606ecb66287c0199f3aa6d95f6774_7
In this section we discuss in isolation different challenges reflected in <cite>DAQUAR</cite>.
motivation
2606ecb66287c0199f3aa6d95f6774_8
The machine world in <cite>DAQUAR</cite> is represented as a set of images and questions about their content.
background
2606ecb66287c0199f3aa6d95f6774_9
<cite>DAQUAR</cite> contains 1088 different nouns in the question, 803 in the answers, and 1586 altogether (we use the Stanford POS Tagger [59] to extract the nouns from the questions).
background
2606ecb66287c0199f3aa6d95f6774_10
<cite>DAQUAR</cite> also contains other parts of speech where only colors and spatial prepositions are grounded in <cite>[27]</cite> .
background
2606ecb66287c0199f3aa6d95f6774_11
Moreover, ambiguities naturally emerge due to fine grained categories that exist in <cite>DAQUAR</cite>.
background
2606ecb66287c0199f3aa6d95f6774_12
<cite>DAQUAR</cite> includes various challenges related to natural language understanding.
motivation
2606ecb66287c0199f3aa6d95f6774_13
Common sense knowledge <cite>DAQUAR</cite> includes questions that can be reliably answered using common sense knowledge.
background
2606ecb66287c0199f3aa6d95f6774_14
To sum up, we believe that common sense knowledge is an interesting venue to explore with <cite>DAQUAR</cite>.
motivation
2606ecb66287c0199f3aa6d95f6774_15
Some authors [23, 24, <cite>27]</cite> treat the grounding (understood here as the logical representation of the meaning of the question) as a latent variable in the question answering task.
background
2606ecb66287c0199f3aa6d95f6774_16
We exemplify the aforementioned requirements by illustrating the WUPS score, an automatic metric that quantifies performance of the holistic architectures proposed by <cite>[27]</cite> .
uses
2606ecb66287c0199f3aa6d95f6774_17
The authors of <cite>[27]</cite> have proposed using WUP similarity [62] as the membership measure µ in the WUPS score. Such a choice of µ suffers from the aforementioned coverage problem and the whole metric takes only one human interpretation of the question into account. Future directions for defining metrics: Recent work provides several directions towards improving scores.
future_work
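For concreteness, a minimal sketch of the WUPS-style score described above, with the membership measure µ left pluggable. The function name and the exact-match µ used in the demo are illustrative assumptions; in <cite>[27]</cite> µ would be WUP similarity (WordNet-based), which this sketch does not implement:

```python
def wups_score(answer, truth, mu):
    """Soft set-membership score between a predicted answer and a
    ground-truth answer (both lists of tokens): the minimum of the two
    directed products of best per-token membership scores."""
    if not answer or not truth:
        return 0.0
    down = 1.0  # every predicted token must be covered by some truth token
    for a in answer:
        down *= max(mu(a, t) for t in truth)
    up = 1.0    # every truth token must be covered by some predicted token
    for t in truth:
        up *= max(mu(t, a) for a in answer)
    return min(down, up)

# toy membership measure standing in for WUP similarity
exact = lambda x, y: 1.0 if x == y else 0.0

print(wups_score(["red", "chair"], ["red", "chair"], exact))   # 1.0
print(wups_score(["red", "chair"], ["blue", "chair"], exact))  # 0.0
```

As the snippet notes, a hard µ like exact match inherits the coverage problem; a graded µ is what makes the metric soft.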
2606ecb66287c0199f3aa6d95f6774_18
We identify particular challenges that holistic tasks should exhibit and exemplify how they are manifested in <cite>a recent question answering challenge</cite> <cite>[27]</cite> . To judge competing architectures and measure the progress on the task, we suggest several directions to further improve existing metrics, and discuss different experimental scenarios.
future_work
264bdb348c13f167768fd859b047e8_0
<cite>Olabiyi et al. [7]</cite> tackle this problem by training a modified HRED generator alongside an adversarial discriminator in order to provide a stronger guarantee to the generator's output.
background
264bdb348c13f167768fd859b047e8_1
However, the HRED system suffers from lack of diversity and does not support any guarantees on the generator output since the output conditional probability is not calibrated. <cite>Olabiyi et al. [7]</cite> tackle this problem by training a modified HRED generator alongside an adversarial discriminator in order to provide a stronger guarantee to the generator's output.
background
264bdb348c13f167768fd859b047e8_2
However, the HRED system suffers from lack of diversity and does not support any guarantees on the generator output since the output conditional probability is not calibrated. <cite>Olabiyi et al. [7]</cite> tackle this problem by training a modified HRED generator alongside an adversarial discriminator in order to provide a stronger guarantee to the generator's output. While the hredGAN system improves upon response quality, it does not capture speaker and other attribute modalities within a dataset and fails to generate persona-specific responses in datasets with multiple modalities.
motivation background
264bdb348c13f167768fd859b047e8_3
<cite>Olabiyi et al. [7]</cite> tackle this problem by training a modified HRED generator alongside an adversarial discriminator in order to provide a stronger guarantee to the generator's output. While the hredGAN system improves upon response quality, it does not capture speaker and other attribute modalities within a dataset and fails to generate persona-specific responses in datasets with multiple modalities. At the same time, there has been some recent work on introducing persona into dialogue models.
background
264bdb348c13f167768fd859b047e8_4
To overcome these limitations, we propose phredGAN , a multi-modal hredGAN dialogue system which additionally conditions the adversarial framework proposed by <cite>Olabiyi et al. [7]</cite> on speaker and/or utterance attributes in order to maintain response quality of hredGAN and still capture speaker and other modalities within a conversation.
extends uses
264bdb348c13f167768fd859b047e8_5
We train and sample the proposed phredGAN similar to the procedure for hredGAN <cite>[7]</cite> .
uses similarities
264bdb348c13f167768fd859b047e8_6
We train and sample the proposed phredGAN similar to the procedure for hredGAN <cite>[7]</cite> . We demonstrate system superiority over hredGAN and the state-of-the-art persona conversational model in terms
extends differences
264bdb348c13f167768fd859b047e8_7
We demonstrate system superiority over hredGAN and the state-of-the-art persona conversational model in terms
differences
264bdb348c13f167768fd859b047e8_8
The hredGAN proposed by Olabiyi et al. <cite>[7]</cite> contains three major components.
background
264bdb348c13f167768fd859b047e8_9
In the case of hredGAN <cite>[7]</cite> , it is a bidirectional RNN that discriminates at the word level to capture both the syntactic and semantic difference between the ground truth and the generator output.
background
264bdb348c13f167768fd859b047e8_10
Problem Formulation: The hredGAN <cite>[7]</cite> formulates multi-turn dialogue response generation as: given a dialogue history consisting of a sequence of utterances, where T_i is the number of generated tokens. The framework uses a conditional GAN structure to learn a mapping from an observed dialogue history to a sequence of output tokens.
background
264bdb348c13f167768fd859b047e8_11
The generator, G, is trained to produce sequences that cannot be distinguished from the ground truth by an adversarially trained discriminator, D, akin to a two-player min-max optimization problem. The generator is also trained to minimize the cross-entropy loss L_MLE(G) between the ground truth X_{i+1} and the generator output Y_i . The following objective summarizes both goals: where λ_G and λ_M are hyperparameters and L_cGAN(G, D) and L_MLE(G) are defined in Eqs. (5) and (7) of <cite>[7]</cite> respectively.
background
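The combined objective in this record can be sketched numerically as follows. This is an illustrative reconstruction, not the authors' code: the function names and per-token probability/score inputs are assumptions, and the adversarial term uses the −log D(G(.)) form referenced elsewhere in these excerpts.

```python
import math

def mle_loss(p_truth):
    """L_MLE(G): mean negative log-probability the generator assigns
    to the ground-truth tokens X_{i+1}."""
    return sum(-math.log(p) for p in p_truth) / len(p_truth)

def adv_loss(d_scores):
    """Adversarial generator term: mean -log D(G(.)) over the
    discriminator's word-level scores on the generated tokens."""
    return sum(-math.log(d) for d in d_scores) / len(d_scores)

def generator_objective(d_scores, p_truth, lam_g=1.0, lam_m=1.0):
    # weighted sum lambda_G * L_cGAN + lambda_M * L_MLE from the snippet
    return lam_g * adv_loss(d_scores) + lam_m * mle_loss(p_truth)

# a perfect generator (fools D, puts all mass on the truth) has zero loss
print(generator_objective([1.0, 1.0], [1.0, 1.0]))  # 0.0
```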
264bdb348c13f167768fd859b047e8_12
The proposed architecture of phredGAN is very similar to that of hredGAN summarized above.
similarities
264bdb348c13f167768fd859b047e8_13
The proposed architecture of phredGAN is very similar to that of hredGAN summarized above. The only difference is that the dialogue history is now X_i = (X_1, C_1), (X_2, C_2), ..., (X_i, C_i), where C_i is additional input that represents the speaker and/or utterance attributes.
extends differences
264bdb348c13f167768fd859b047e8_14
Discriminator: In addition to the D_RNN in the discriminator of hredGAN , if the attribute C_{i+1} is a sequence of tokens, then the same attRNN is used to summarize the attribute token embeddings; otherwise the single attribute embedding is concatenated with the other inputs of D_RNN in Fig. 1 of <cite>[7]</cite> .
background
264bdb348c13f167768fd859b047e8_15
The modified system is as follows: Discriminator: In addition to the D_RNN in the discriminator of hredGAN , if the attribute C_{i+1} is a sequence of tokens, then the same attRNN is used to summarize the attribute token embeddings; otherwise the single attribute embedding is concatenated with the other inputs of D_RNN in Fig. 1 of <cite>[7]</cite> .
uses
264bdb348c13f167768fd859b047e8_16
Noise Injection: Although <cite>[7]</cite> demonstrated that injecting noise at the word level seems to perform better than at the utterance level for hredGAN , we found that this is dataset-dependent for phredGAN .
differences
264bdb348c13f167768fd859b047e8_17
The modified system is as follows: Noise Injection: Although <cite>[7]</cite> demonstrated that injecting noise at the word level seems to perform better than at the utterance level for hredGAN , we found that this is dataset-dependent for phredGAN .
extends
264bdb348c13f167768fd859b047e8_18
We train both the generator and the discriminator (with a shared encoder) of phredGAN using the same training procedure in Algorithm 1 with λ_G = λ_M = 1 <cite>[7]</cite> .
uses
264bdb348c13f167768fd859b047e8_19
The number of attributes, V_c , is dataset-dependent. We compute the generator output similar to Eq. (11) in <cite>[7]</cite> .
similarities uses
264bdb348c13f167768fd859b047e8_20
The parameter update is conditioned on the discriminator accuracy performance as in <cite>[7]</cite> with acc_th^D = 0.99 and acc_th^G = 0.75.
uses
264bdb348c13f167768fd859b047e8_21
For the modified noise sample, we perform a linear search for α with sample size L = 1 based on the average discriminator loss, −log D(G(·)) <cite>[7]</cite> , using trained models run in autoregressive mode to reflect performance in actual deployment.
uses
264bdb348c13f167768fd859b047e8_22
In this section, we explore phredGAN 's results on two conversational datasets and compare its performance to the persona system in Li et al. [8] and hredGAN <cite>[7]</cite> in terms of quantitative and qualitative measures.
similarities differences
264bdb348c13f167768fd859b047e8_23
We follow the same training, development, and test split as the UDC dataset in <cite>[7]</cite> , with 90%, 5%, and 5% proportions, respectively.
uses
264bdb348c13f167768fd859b047e8_24
We use similar evaluation metrics as in <cite>[7]</cite> including perplexity, BLEU [15] , ROUGE [16] , and distinct n-gram [17] scores.
uses
264bdb348c13f167768fd859b047e8_25
We also compare our system to hredGAN from <cite>[7]</cite> in terms of perplexity, ROUGE, and distinct n-gram scores.
similarities differences
264bdb348c13f167768fd859b047e8_26
In <cite>[7]</cite> , the authors recommend the version with word-level noise injection, hredGAN w , so we use this version in our comparison.
uses
264bdb348c13f167768fd859b047e8_27
Also for fair comparison, we use the same UDC dataset as reported in <cite>[7]</cite> .
uses
26b00c6e5b499eea30e9cef0bbaf9f_0
Though Baroni et al. (2014) suggested that predictive models which use neural networks to generate the distributed word representations (also known as embeddings in this context) outperform counting models which work on co-occurrence matrices, recent work shows evidence to the contrary (Levy et al., 2014;<cite> Salle et al., 2016)</cite> .
motivation
26b00c6e5b499eea30e9cef0bbaf9f_1
In this paper, we focus on improving a state-of-the-art counting model, LexVec <cite>(Salle et al., 2016)</cite> , which performs factorization of the positive pointwise mutual information (PPMI) matrix using window sampling and negative sampling (WSNS).
uses
26b00c6e5b499eea30e9cef0bbaf9f_2
Though Baroni et al. (2014) suggested that predictive models which use neural networks to generate the distributed word representations (also known as embeddings in this context) outperform counting models which work on co-occurrence matrices, recent work shows evidence to the contrary (Levy et al., 2014;<cite> Salle et al., 2016)</cite> . In this paper, we focus on improving a state-of-the-art counting model, LexVec <cite>(Salle et al., 2016)</cite> , which performs factorization of the positive pointwise mutual information (PPMI) matrix using window sampling and negative sampling (WSNS).
motivation
26b00c6e5b499eea30e9cef0bbaf9f_3
P_n is the distribution used for drawing negative samples, chosen to be with α = 3/4 (Mikolov et al., 2013b;<cite> Salle et al., 2016)</cite> , and #(w) the unigram frequency of w. Two methods were defined for the minimization of eqs. (2) and (3): Mini-batch and Stochastic <cite>(Salle et al., 2016)</cite> .
uses
26b00c6e5b499eea30e9cef0bbaf9f_4
with α = 3/4 (Mikolov et al., 2013b;<cite> Salle et al., 2016)</cite> , and #(w) the unigram frequency of w. Two methods were defined for the minimization of eqs. (2) and (3): Mini-batch and Stochastic <cite>(Salle et al., 2016)</cite> . Since the latter is more computationally efficient and yields equivalent results, we adopt it in this paper.
uses
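The smoothed negative-sampling distribution described here, P_n(w) proportional to the unigram frequency #(w) raised to α = 3/4, is small enough to sketch directly; the function name is illustrative:

```python
def negative_sampling_dist(unigram_counts, alpha=0.75):
    """P_n(w) proportional to #(w)**alpha: the smoothed unigram
    distribution used to draw negative samples."""
    weights = {w: c ** alpha for w, c in unigram_counts.items()}
    total = sum(weights.values())
    return {w: x / total for w, x in weights.items()}

pn = negative_sampling_dist({"the": 1000, "cat": 10})
# raising counts to the 3/4 power boosts rare words: the probability
# ratio P_n(cat)/P_n(the) becomes (10/1000)**0.75 ~= 0.032, not 0.01
```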
26b00c6e5b499eea30e9cef0bbaf9f_5
As suggested by Levy et al. (2015) and<cite> Salle et al. (2016)</cite> , positional contexts (introduced in Levy et al. (2014) ) are a potential solution to poor performance on syntactic analogy tasks.
background motivation
26b00c6e5b499eea30e9cef0bbaf9f_6
We report results from<cite> Salle et al. (2016)</cite> and use the same training corpus and parameters to train LexVec with positional contexts and external memory.
uses
26b00c6e5b499eea30e9cef0bbaf9f_7
As recommended in Levy et al. (2015) and used in<cite> Salle et al. (2016)</cite> , the PPMI matrix used in all LexVec models and in PPMI-SVD is transformed using context distribution smoothing, exponentiating context frequencies to the power 0.75.
uses
26b00c6e5b499eea30e9cef0bbaf9f_8
Therefore, we perform the exact same evaluation as<cite> Salle et al. (2016)</cite> , namely the WS-353 Similarity (WSim) and Relatedness (WRel) (Finkelstein et al., 2001) , MEN (Bruni et al., 2012) , MTurk (Radinsky et al., 2011) , RW (Luong et al., 2013) , SimLex-999 (Hill et al., 2015) , MC (Miller and Charles, 1991) , RG (Rubenstein and Goodenough, 1965) , and SCWS (Huang et al., 2012) word similarity tasks 1 , and the Google semantic (GSem) and syntactic (GSyn) analogy (Mikolov et al., 2013a) and MSR syntactic analogy dataset (Mikolov et al., 2013c) tasks.
uses
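Word similarity tasks like these are conventionally scored by the Spearman rank correlation between the model's cosine similarities and the human ratings; a self-contained sketch (no tie handling, helper names are ours):

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def spearman(xs, ys):
    """Spearman rank correlation between two score lists (no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# identical orderings of model and human scores correlate perfectly
print(spearman([0.9, 0.5, 0.1], [10, 6, 2]))  # 1.0
```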
26fbf9f4ae740513d8889160ad9f63_0
Several works have shown that discourse relations can improve the results of summarization in the case of factual texts or news articles (e.g.<cite> (Otterbacher et al., 2002)</cite> ).
background
26fbf9f4ae740513d8889160ad9f63_1
In particular,<cite> (Otterbacher et al., 2002)</cite> experimentally showed that discourse relations can improve the coherence of multi-document summaries.
background
26fbf9f4ae740513d8889160ad9f63_2
The comparison, contingency, and illustration relations are also considered by most of the work in the field of discourse analysis such as the PDTB: Penn Discourse TreeBank research group <cite>(Prasad et al., 2008)</cite> and the RST Discourse Treebank research group (Carlson and Marcu, 2001 ).
background
26fbf9f4ae740513d8889160ad9f63_3
From our corpus analysis, we have identified the six most prevalent discourse relations in this blog dataset, namely comparison, contingency, illustration, attribution, topic-opinion, and attributive. The comparison, contingency, and illustration relations are also considered by most of the work in the field of discourse analysis such as the PDTB: Penn Discourse TreeBank research group <cite>(Prasad et al., 2008)</cite> and the RST Discourse Treebank research group (Carlson and Marcu, 2001 ).
similarities
26fbf9f4ae740513d8889160ad9f63_4
The comparison, contingency, and illustration relations are also considered by most of the work in the field of discourse analysis such as the PDTB: Penn Discourse TreeBank research group <cite>(Prasad et al., 2008)</cite> and the RST Discourse Treebank research group (Carlson and Marcu, 2001 ). We considered three additional classes of relations: attributive, attribution, and topic-opinion.
extends differences
26fbf9f4ae740513d8889160ad9f63_5
For example: "Allied Capital is a closed-end management investment company that will operate as a business development concern." As shown in Figure 1 , illustration relations can be sub-divided into sub-categories: joint, list, disjoint, and elaboration relations according to the RST Discourse Treebank (Carlson and Marcu, 2001 ) and the Penn Discourse TreeBank <cite>(Prasad et al., 2008)</cite> .
background
26fbf9f4ae740513d8889160ad9f63_6
As shown in Figure 1 , the contingency relation subsumes several more specific relations: explanation, evidence, reason, cause, result, consequence, background, condition, hypothetical, enablement, and purpose relations according to the Penn Discourse TreeBank <cite>(Prasad et al., 2008)</cite> .
background
26fbf9f4ae740513d8889160ad9f63_7
The comparison relation subsumes the contrast relation according to the Penn Discourse TreeBank <cite>(Prasad et al., 2008)</cite> and the analogy and preference relations according to the RST Discourse Treebank (Carlson and Marcu, 2001) .
background
26fbf9f4ae740513d8889160ad9f63_8
However, we have complemented this parser with three other approaches: (Jindal and Liu, 2006 )'s approach is used to identify intra-sentence comparison relations; we have designed a tagger based on (Fei et al., 2008) 's approach to identify topic-opinion relations; and we have proposed a new approach to tag attributive relations<cite> (Mithun, 2012)</cite> .
extends uses
26fbf9f4ae740513d8889160ad9f63_9
A description and evaluation of these approaches can be found in<cite> (Mithun, 2012)</cite> .
background
26fbf9f4ae740513d8889160ad9f63_10
However, we have complemented this parser with three other approaches: (Jindal and Liu, 2006 )'s approach is used to identify intra-sentence comparison relations; we have designed a tagger based on (Fei et al., 2008) 's approach to identify topic-opinion relations; and we have proposed a new approach to tag attributive relations<cite> (Mithun, 2012)</cite> . A description and evaluation of these approaches can be found in<cite> (Mithun, 2012)</cite> .
uses
26fbf9f4ae740513d8889160ad9f63_11
To measure the usefulness of discourse relations for the summarization of informal texts, we have tested the effect of each relation with four different summarizers: BlogSum<cite> (Mithun, 2012)</cite> , MEAD<cite> (Radev et al., 2004)</cite> , the best scoring system at TAC 2008 5 and the best scoring system at DUC 2007 6 .
uses
26fbf9f4ae740513d8889160ad9f63_12
To measure the usefulness of discourse relations for the summarization of informal texts, we have tested the effect of each relation with four different summarizers: BlogSum<cite> (Mithun, 2012)</cite> , MEAD<cite> (Radev et al., 2004)</cite> , the best scoring system at TAC 2008 5 and the best scoring system at DUC 2007 6 . Finally the most appropriate schema is selected based on a given question type; and candidate sentences fill particular slots in the selected schema based on which discourse relations they contain in order to create the final summary (details of BlogSum can be found in<cite> (Mithun, 2012)</cite> ).
uses background
26fbf9f4ae740513d8889160ad9f63_13
Finally the most appropriate schema is selected based on a given question type; and candidate sentences fill particular slots in the selected schema based on which discourse relations they contain in order to create the final summary (details of BlogSum can be found in<cite> (Mithun, 2012)</cite> ).
background
26fbf9f4ae740513d8889160ad9f63_14
To ensure that the results were not specific to our summarizer, we performed the same experiments with two other systems: the MEAD summarizer<cite> (Radev et al., 2004)</cite> , a publicly available and a widely used summarizer, and with the output of the TAC best-scoring system.
uses
27dbdd4827554df0f53013966242dc_0
Our work is based on the SummaRuNNer model <cite>[5]</cite> . It consists of a two-layer bi-directional Gated Recurrent Unit (GRU) Recurrent Neural Network (RNN) which treats the summarization problem as a binary sequence classification problem, where each sentence is classified sequentially as a sentence to be included or not in the summary. However, we introduced two modifications to the original SummaRuNNer architecture, leading to better results while reducing complexity.
extends
27dbdd4827554df0f53013966242dc_2
In contrast to <cite>[5]</cite> , we trained our model only on CNN articles from the CNN/Daily Mail corpus [2] .
differences
27dbdd4827554df0f53013966242dc_3
In a similar approach to <cite>[5]</cite> , we calculated the ROUGE-1 F1 score between each sentence and its article's abstractive summary.
similarities
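A minimal sketch of the per-sentence labeling score described here, i.e. unigram-overlap ROUGE-1 F1 (the actual pipeline in <cite>[5]</cite> may differ in tokenization, stemming, and stopword handling):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate sentence and a reference summary (token lists)."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat".split(), "the cat slept".split()))  # ~0.667
```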
27ee0fbed3a88854ebe945dfffefd8_0
A review of the methods in the article <cite>[35]</cite> about the recognition of timexes for English and Spanish has shown a certain shift within the most popular solutions.
uses
27ee0fbed3a88854ebe945dfffefd8_1
The best systems listed in <cite>[35]</cite> , called TIPSem [16] and ClearTK [1] , use CRFs for recognition, so initially, we decided to apply the CRF-based approach for this task.
uses
27ee0fbed3a88854ebe945dfffefd8_2
Experiments were carried out by the method proposed in <cite>[35]</cite> .
uses
27ee0fbed3a88854ebe945dfffefd8_3
Then we evaluated these results using more detailed measures for timexes, presented in <cite>[35]</cite> .
uses
27ee0fbed3a88854ebe945dfffefd8_5
If there was an overlap, a relaxed type F1-score (Type.F1) was calculated <cite>[35]</cite> .
uses
27ee0fbed3a88854ebe945dfffefd8_6
Then we evaluated these results using more detailed measures for timexes, presented in <cite>[35]</cite> . A relaxed F1 (Relaxed.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] <cite>[35]</cite> .
uses
27ee0fbed3a88854ebe945dfffefd8_7
Table 9 : Evaluation results for all TIMEX3 classes (total) for 9 word embeddings models (3 best models from each embeddings group: EE, EP, EC from Table 8 ) using the following measures from <cite>[35]</cite> : strict precision, strict recall, strict F1-score, relaxed precision, relaxed recall, relaxed F1-score, type F1-score.
uses
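The strict vs. relaxed measures listed here can be sketched as follows; spans are (start, end) offsets and the helper names are ours. The type F1 of <cite>[35]</cite> would additionally require the TIMEX3 class of overlapping entities to match, which this sketch omits:

```python
def strict_match(sys_span, gold_span):
    # strict: boundaries must be identical
    return sys_span == gold_span

def relaxed_match(sys_span, gold_span):
    # relaxed: any overlap counts, e.g. [Sunday] vs [Sunday morning]
    (s1, e1), (s2, e2) = sys_span, gold_span
    return s1 < e2 and s2 < e1

def span_f1(system, gold, match):
    """Precision/recall/F1 over predicted vs. gold entity spans under
    the given matching criterion."""
    tp_p = sum(any(match(s, g) for g in gold) for s in system)
    tp_r = sum(any(match(s, g) for s in system) for g in gold)
    p = tp_p / len(system) if system else 0.0
    r = tp_r / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# [Sunday] vs gold [Sunday morning]: misses strictly, hits relaxed
print(span_f1([(0, 1)], [(0, 2)], strict_match))   # 0.0
print(span_f1([(0, 1)], [(0, 2)], relaxed_match))  # 1.0
```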
28038a4fa4182ccdc6134f2138c0da_0
The task of definition modeling, introduced by <cite>Noraset et al. (2017)</cite> , consists in generating the dictionary definition of a specific word: for instance, given the word "monotreme" as input, the system would need to produce a definition such as "any of an order (Monotremata) of egg-laying mammals comprising the platypuses and echidnas".
background
28038a4fa4182ccdc6134f2138c0da_1
A major intended application of definition modeling is the explication and evaluation of distributed lexical representations, also known as word embeddings<cite> (Noraset et al., 2017)</cite> .
background
28038a4fa4182ccdc6134f2138c0da_2
In their seminal work on definition modeling, <cite>Noraset et al. (2017)</cite> likened systems generating definitions to language models, which can naturally be used to generate arbitrary text.
background
28038a4fa4182ccdc6134f2138c0da_3
This reformulation can appear contrary to the original proposal by <cite>Noraset et al. (2017)</cite> , which conceived definition modeling as a "word-to-sequence task".
background
28038a4fa4182ccdc6134f2138c0da_4
Though different kinds of linguistic contexts have been suggested throughout the literature, we remark here that sentential context may sometimes suffice to guess the meaning of a word that we don't know (Lazaridou et al., 2017) . Quoting from the example above, the context "enough around-let's get back to work!" sufficiently characterizes the meaning of the omitted verb to allow for an approximate definition for it even if the blank is not filled (Taylor, 1953; Devlin et al., 2018) . This reformulation can appear contrary to the original proposal by <cite>Noraset et al. (2017)</cite> , which conceived definition modeling as a "word-to-sequence task".
differences
28038a4fa4182ccdc6134f2138c0da_5
Despite some key differences, all of the previously proposed architectures we are aware of<cite> (Noraset et al., 2017</cite>; Gadetsky et al., 2018) followed a pattern similar to sequence-to-sequence models.
similarities
28038a4fa4182ccdc6134f2138c0da_6
In the case of <cite>Noraset et al. (2017)</cite> , the encoding was the concatenation of the embedding of the definiendum, a vector representation of its sequence of characters derived from a characterlevel CNN, and its "hypernym embedding".
background
28038a4fa4182ccdc6134f2138c0da_7
Should we mark the definiendum before encoding, then only the definiendum embedding is passed into the encoder: the resulting system provides out-of-context definitions, like in <cite>Noraset et al. (2017)</cite> where the definition is not linked to the context of a word but to its definiendum only.
similarities
28038a4fa4182ccdc6134f2138c0da_8
The dropout rate and number of warmup steps were set using a hyperparameter search on the dataset from <cite>Noraset et al. (2017)</cite> , during which encoder and decoder vocabularies were merged for computational simplicity and models stopped after 12,000 steps.
uses
28038a4fa4182ccdc6134f2138c0da_9
The dataset of <cite>Noraset et al. (2017)</cite> (henceforth D Nor ) maps definienda to their respective definientia, as well as additional information not used here.
differences
28038a4fa4182ccdc6134f2138c0da_10
We train our models on three distinct datasets, which are all borrowed or adapted from previous works on definition modeling. The dataset of <cite>Noraset et al. (2017)</cite> (henceforth D Nor ) maps definienda to their respective definientia, as well as additional information not used here.
uses
28038a4fa4182ccdc6134f2138c0da_11
Perplexity measures for <cite>Noraset et al. (2017)</cite> and Gadetsky et al. (2018) are taken from the authors' respective publications.
background
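For reference, the perplexity metric compared across these definition-modeling papers is the exponentiated mean negative log-probability of the gold tokens; a minimal sketch:

```python
import math

def perplexity(token_log_probs):
    """exp of the average negative log-probability per token; lower is
    better, and a uniform 1/k guess over k candidates gives perplexity k."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# assigning probability 0.25 to every token yields perplexity 4
print(perplexity([math.log(0.25)] * 10))
```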