id: stringlengths 32-33
x: stringlengths 41-1.75k
y: stringlengths 4-39
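Each record below follows this (id, x, y) layout: a 32-33 character example id, a citation context x containing <cite>...</cite> markers around the cited reference, and a y field holding one or more space-separated citation-intent labels (e.g. "uses background"). The following is a minimal parsing sketch, assuming the records are stored as JSON Lines with exactly these three fields; the file name and on-disk format are assumptions, not part of this listing.

import json
from collections import Counter

# Assumed file name and JSON Lines layout (fields: id, x, y); adjust to the actual distribution.
PATH = "citation_intents.jsonl"

def load_records(path):
    """Yield (id, text, labels) triples; y is split into individual intent labels."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            labels = row["y"].split()  # e.g. "uses background" -> ["uses", "background"]
            yield row["id"], row["x"], labels

# Example usage: count label frequencies across the corpus.
if __name__ == "__main__":
    counts = Counter(label for _, _, labels in load_records(PATH) for label in labels)
    print(counts.most_common())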
28038a4fa4182ccdc6134f2138c0da_12
Perplexity measures for <cite>Noraset et al. (2017)</cite> and Gadetsky et al. (2018) are taken from the authors' respective publications. All our models perform better than previous proposals, by a margin of 4 to 10 points, for a relative improvement of 11-23%.
differences
28038a4fa4182ccdc6134f2138c0da_13
A manual analysis of definitions produced by our system reveals issues similar to those discussed by <cite>Noraset et al. (2017)</cite>, namely self-reference, POS-mismatches, over- and under-specificity, antonymy, and incoherence.
similarities
28038a4fa4182ccdc6134f2138c0da_14
As for POS-mismatches, we do note that the work of <cite>Noraset et al. (2017)</cite> had a much lower rate of 4.29%: we suggest that this may be due to the fact that they employ a learned character-level convolutional network, which arguably would be able to capture orthography and rudiments of morphology.
differences
291a6ac3f0c2d27ca69ee8f5f266f5_0
This paper proposes an expansion of the set of primitive constraints available within the Primitive Optimality Theory framework <cite>(Eisner, 1997a)</cite>.
uses
291a6ac3f0c2d27ca69ee8f5f266f5_1
This paper proposes an expansion of the set of primitive constraints available within the Primitive Optimality Theory framework <cite>(Eisner, 1997a)</cite>. This expansion consists of the addition of a new family of constraints--existential implicational constraints, which allow the specification of faithfulness constraints that can be satisfied at a distance--and the definition of two ways to combine simple constraints into complex constraints, that is, constraint disjunction (Crowhurst and Hewitt, 1995) and local constraint conjunction (Smolensky, 1995).
extends
291a6ac3f0c2d27ca69ee8f5f266f5_2
Primitive Optimality Theory (OTP) <cite>(Eisner, 1997a)</cite> , and extensions to it (e.g., Albro (1998) ), can be useful as a formal system in which phonological analyses can be implemented and evaluated.
background
291a6ac3f0c2d27ca69ee8f5f266f5_3
Primitive Optimality Theory (OTP) <cite>(Eisner, 1997a)</cite> , and extensions to it (e.g., Albro (1998) ), can be useful as a formal system in which phonological analyses can be implemented and evaluated. However, for certain types of constraints, translation into the primitives of OTP (Eisner (1997b) ) can only be accomplished by adding to the grammar a number of ad hoc phonological tiers.
motivation
291a6ac3f0c2d27ca69ee8f5f266f5_4
This paper looks at three types of constraints employed throughout the Optimality Theoretic literature that cannot be translated into the [footnote 1: The computation time for an Optimality Theoretic derivation within the implementation of Albro (1998) increases exponentially with the number of tiers. The same is true for the implementation described in <cite>Eisner (1997a)</cite>, although a proposal is given there for a method that might improve the situation.]
uses motivation
291a6ac3f0c2d27ca69ee8f5f266f5_5
primitives of OTP without reference to ad hoc tiers, and proposes a formalization of these constraints that is compatible with the finite state model described in<cite> Eisner (1997a)</cite> and Albro (1998) .
background
291a6ac3f0c2d27ca69ee8f5f266f5_6
2 Existential Implication, 2.1 Motivation: OTP as described in <cite>Eisner (1997a)</cite> provides some support for correspondence constraints (input-output only).
background
291a6ac3f0c2d27ca69ee8f5f266f5_7
Using the FST notation of<cite> Eisner (1997a)</cite> , the implementation for this constraint would be the following FST:
uses
29294f2ed3cc2772ca57fd4294274c_0
<cite>Leuski et al. (2006)</cite> developed algorithms for training such characters using linked questions and responses in the form of unstructured natural language text.
background
29294f2ed3cc2772ca57fd4294274c_1
These algorithms have been incorporated into a tool which has been used to create characters for a variety of applications (e.g.<cite> Leuski et al., 2006</cite>; Artstein et al., 2009; Swartout et al., 2010) .
background
29294f2ed3cc2772ca57fd4294274c_2
We reimplemented parts of the response ranking algorithms of <cite>Leuski et al. (2006)</cite> , including both the language modeling (LM) and cross-language modeling (CLM) approaches.
extends differences
29294f2ed3cc2772ca57fd4294274c_3
We did not implement the parameter learning of <cite>Leuski et al. (2006)</cite>; instead we use a constant smoothing parameter λ_π = λ_φ = 0.1.
differences
29294f2ed3cc2772ca57fd4294274c_4
We also do not use the response threshold parameter, which <cite>Leuski et al. (2006)</cite> use to determine whether the top-ranked response is good enough.
differences
29294f2ed3cc2772ca57fd4294274c_5
This measure does not take into account non-understanding, that is the classifier's determination that the best response is not good enough<cite> (Leuski et al., 2006)</cite> , since this capability was not implemented; however, since all of our test questions are known to have at least one appropriate response, any non-understanding of a question would necessarily count against accuracy anyway.
differences background
29294f2ed3cc2772ca57fd4294274c_6
The LM approach almost invariably produced better results than the CLM approach; this is the opposite of the findings of <cite>Leuski et al. (2006)</cite> , where CLM fared consistently better.
differences
29294f2ed3cc2772ca57fd4294274c_7
In our experiments the LM approach consistently outperforms the CLM approach, contra <cite>Leuski et al. (2006)</cite> .
differences
2a01f96893f9c0630a01ecce320184_0
Several research works have been proposed to detect propaganda on document-level (Rashkin et al., 2017; Barrón-Cedeño et al., 2019b), sentence-level and fragment-level <cite>(Da San Martino et al., 2019)</cite>.
background
2a01f96893f9c0630a01ecce320184_1
Although Da San<cite> Martino et al. (2019)</cite> indicates that multi-task learning of both the SLC and the FLC could be beneficial for the SLC, in this paper, we only focus on the SLC task so as to better investigate whether context information could improve the performance of our system.
differences
2a01f96893f9c0630a01ecce320184_2
A fine-grained propaganda corpus was proposed in Da San<cite> Martino et al. (2019)</cite> which includes both sentence-level and fragment-level information.
background
2a01f96893f9c0630a01ecce320184_3
More details of the dataset could be found in Da San<cite> Martino et al. (2019)</cite> .
background
2a01f96893f9c0630a01ecce320184_4
As described in Da San<cite> Martino et al. (2019)</cite> , the source of the dataset that we use is news articles, and since the title is usually the summarization of a news article, we use the title as supplementary information.
uses background
2a01f96893f9c0630a01ecce320184_5
In the future, we plan to apply multi-task learning to this context-dependent BERT, similar to the method mentioned in Da San<cite> Martino et al. (2019)</cite> or introducing other kinds of tasks, such as sentiment analysis or domain classification.
similarities future_work
2a84615479af66bbf875517a3a753b_0
In our previous work <cite>[7]</cite> , we applied a dual RNN in order to obtain a richer representation by blending the content and acoustic knowledge.
background
2a84615479af66bbf875517a3a753b_1
In our previous work <cite>[7]</cite> , we applied a dual RNN in order to obtain a richer representation by blending the content and acoustic knowledge. In this paper, we improve upon our earlier work by incorporating an attention mechanism in the emotion recognition framework.
extends
2a84615479af66bbf875517a3a753b_2
Recently,<cite> [7,</cite> 18] combined acoustic information and conversation transcripts using a neural network-based model to improve emotion classification accuracy.
background
2a84615479af66bbf875517a3a753b_3
Recently,<cite> [7,</cite> 18] combined acoustic information and conversation transcripts using a neural network-based model to improve emotion classification accuracy. However, none of these studies utilized an attention method over the audio and text modalities in tandem for contextual understanding of the emotion in audio recordings.
background motivation
2a84615479af66bbf875517a3a753b_4
Motivated by the architecture used in<cite> [7,</cite> 17, 19] , we train a recurrent encoder to predict the categorical class of a given audio signal.
motivation
2a84615479af66bbf875517a3a753b_5
To follow previous research <cite>[7]</cite>, we also add another prosodic feature vector, p, with each o_t to generate a more informative vector representation of the signal, o^A_t.
uses
2a84615479af66bbf875517a3a753b_6
Previous research used multi-modal information independently using a neural network model by concatenating features from each modality<cite> [7,</cite> 21].
background
2a84615479af66bbf875517a3a753b_7
Previous research used multi-modal information independently using a neural network model by concatenating features from each modality<cite> [7,</cite> 21]. As opposed to this approach, we propose a neural network architecture that exploits information in each modality by extracting relevant segments of the speech data using information from the lexical content (and vice-versa).
differences
2a84615479af66bbf875517a3a753b_8
For consistent comparison with previous works<cite> [7,</cite> 18] , all utterances labeled "excitement" are merged with those labeled "happiness".
uses
2a84615479af66bbf875517a3a753b_9
As this research is extended work from previous research <cite>[7]</cite> , we use the same feature extraction method as done in our previous work.
extends
2a84615479af66bbf875517a3a753b_10
We use the same dataset and features as other researchers<cite> [7,</cite> 18] .
uses
2a84615479af66bbf875517a3a753b_11
In audio-BRE (Fig. 2(a) ), most of the emotion labels are frequently misclassified as neutral class, supporting the claims of<cite> [7,</cite> 25] .
similarities
2b10893f03b4f5eaac0fe06b4d6115_0
In order to compare the performance of our system with others, we also used the dataset of<cite> Tu and Roth (2012)</cite> , which contains 1,348 sentences taken from different parts of the British National Corpus.
uses
2b10893f03b4f5eaac0fe06b4d6115_1
One example is<cite> Tu and Roth (2012)</cite>, where the authors examined a verb-particle combination only if the verbal components were formed with one of the previously given six verbs (i.e. make, take, have, give, do, get).
background
2b10893f03b4f5eaac0fe06b4d6115_2
As Table 3 shows, the six verbs used by<cite> Tu and Roth (2012)</cite> are responsible for only 50 VPCs on the Wiki50 corpus, so it covers only 11.16% of all gold standard VPCs.
background
2b10893f03b4f5eaac0fe06b4d6115_3
Furthermore, 127 different verbal components occurred in Wiki50, but the verbs have and do -which are used by<cite> Tu and Roth (2012)</cite> -do not appear in the corpus as verbal components of VPCs.
background
2b10893f03b4f5eaac0fe06b4d6115_4
Moreover, Support Vector Machines (SVM) (Cortes and Vapnik, 1995) results are also reported to compare the performance of our methods with that of<cite> Tu and Roth (2012)</cite> .
uses
2b10893f03b4f5eaac0fe06b4d6115_5
As<cite> Tu and Roth (2012)</cite> presented only the accuracy scores on the Tu & Roth dataset, we also employed an accuracy score as an evaluation metric on this dataset, where positive and negative examples were also marked.
similarities
2b10893f03b4f5eaac0fe06b4d6115_6
We also compared our results with the rule-based results available for Wiki50 and also with the 5-fold cross validation results of<cite> Tu and Roth (2012)</cite> .
uses
2b10893f03b4f5eaac0fe06b4d6115_7
In order to compare the performance of our system with others, we evaluated it on the Tu&Roth dataset <cite>(Tu and Roth, 2012)</cite> .
uses
2b10893f03b4f5eaac0fe06b4d6115_8
Moreover, it also lists the results of<cite> Tu and Roth (2012)</cite> and the VPCTagger evaluated in the 5-fold cross validation manner, as<cite> Tu and Roth (2012)</cite> applied this evaluation schema.
uses
2b10893f03b4f5eaac0fe06b4d6115_9
Moreover, the results obtained with our machine learning approach on the Tu&Roth dataset outperformed those reported in<cite> Tu and Roth (2012)</cite> .
differences
2b10893f03b4f5eaac0fe06b4d6115_10
A striking difference between the Tu & Roth database and Wiki50 is that while<cite> Tu and Roth (2012)</cite> included the verbs do and have in their data, they do not occur at all among the VPCs collected from Wiki50.
background
2b10893f03b4f5eaac0fe06b4d6115_11
Our method yielded better results than those obtained using the dependency parsers on the Wiki50 corpus and the method reported in <cite>(Tu and Roth, 2012)</cite> on the Tu&Roth dataset.
differences
2b148e376c39eae7f674610118e588_0
In this paper, we consider the referential games of <cite>Lazaridou et al. (2017)</cite> , and investigate the representations the agents develop during their evolving interaction.
motivation
2b148e376c39eae7f674610118e588_1
Unlike earlier work (e.g., Briscoe, 2002; Cangelosi and Parisi, 2002; Steels, 2012) , many recent simulations consider realistic visual input, for example, by playing referential games with real-life pictures (e.g., Jorge et al., 2016; <cite>Lazaridou et al., 2017</cite>; Havrylov and Titov, 2017; Lee et al., 2018; Evtimova et al., 2018) . This setup allows us to address the exciting issue of whether the needs of goal-directed communication will lead agents to associate visually-grounded conceptual representations to discrete symbols, developing naturallanguage-like word meanings.
motivation background
2b148e376c39eae7f674610118e588_2
We study here agent representations following the model and setup of <cite>Lazaridou et al. (2017)</cite> .
motivation
2b148e376c39eae7f674610118e588_3
In their first game, <cite>Lazaridou</cite>'s Sender and Receiver are exposed to the same pair of images, one of them being randomly marked as the "target".
background
2b148e376c39eae7f674610118e588_4
Since an analysis of vocabulary usage brings inconclusive evidence that the agents are using the symbols to represent natural concepts (such as beaver or bayonet), <cite>Lazaridou and colleagues</cite> next modify the game, by presenting to the Sender and the Receiver different images for each of the two concepts (e.g., the Sender must now signal that the target is a beaver, while seeing a different beaver from the one shown to the Receiver).
background
2b148e376c39eae7f674610118e588_5
<cite>Lazaridou and colleagues</cite> present preliminary evidence suggesting that, indeed, agents are now developing conceptual symbol meanings.
background
2b148e376c39eae7f674610118e588_6
We replicate <cite>Lazaridou</cite>'s games, and we find that, in both, the agents develop successfully aligned representations that, however, are not capturing conceptual properties at all.
uses motivation
2b148e376c39eae7f674610118e588_7
Architecture We re-implement <cite>Lazaridou</cite>'s Sender and Receiver architectures (using their better-behaved "informed" Sender).
uses
2b148e376c39eae7f674610118e588_8
See <cite>Lazaridou et al. (2017</cite>) for details.
background
2b148e376c39eae7f674610118e588_9
Data Following <cite>Lazaridou et al. (2017)</cite> , for each of the 463 concepts <cite>they</cite> used, we randomly sample 100 images from ImageNet (Deng et al., 2009 ).
uses similarities
2b148e376c39eae7f674610118e588_10
Following <cite>Lazaridou</cite>, the images are passed through a pre-trained VGG ConvNet (Simonyan and Zisserman, 2015) .
similarities uses
2b148e376c39eae7f674610118e588_11
Games We re-implement both <cite>Lazaridou</cite>'s same-image game, where Sender and Receiver are shown the same two images (always of different concepts), and their different-image game, where the Receiver sees different images than the Sender's.
uses
2b148e376c39eae7f674610118e588_12
As we faithfully reproduced the setup of <cite>Lazaridou et al. (2017)</cite> , we refer the reader there for hyper-parameters and training details.
similarities background
2b148e376c39eae7f674610118e588_13
<cite>Lazaridou et al. (2017)</cite> designed <cite>their</cite> second game to encourage more general, concept-like referents. Unfortunately, we replicate the anomalies above in the different-image setup, although to a less marked extent.
background similarities
2b148e376c39eae7f674610118e588_14
However, the important contribution of <cite>Lazaridou et al. (2017)</cite> is to play a signaling game with real-life images instead of artificial symbols. This raises new empirical questions that are not answered by the general mathematical results, such as: When the agents do succeed at communicating, what are the input features they rely upon?
motivation
2b6dd9388c43df4416c738b2d1ed5f_0
In this work, we use the dataset released by <cite>(Davidson et al. 2017)</cite> and the HEOT dataset provided by (Mathur et al. 2018).
uses
2b6dd9388c43df4416c738b2d1ed5f_1
The embeddings were trained on both the datasets provided by <cite>(Davidson et al. 2017 )</cite> and HEOT.
uses
2b6dd9388c43df4416c738b2d1ed5f_2
As indicated by the Figure 1 , the model was initially trained on the dataset provided by <cite>(Davidson et al. 2017)</cite> , and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage.
uses
2b6dd9388c43df4416c738b2d1ed5f_3
For comparison purposes, in Table 4 we have also evaluated our results on the dataset by <cite>(Davidson et al. 2017 )</cite>.
uses
2b6dd9388c43df4416c738b2d1ed5f_4
Both the HEOT and <cite>(Davidson et al. 2017 )</cite> datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign).
background
2b6dd9388c43df4416c738b2d1ed5f_5
Both the HEOT and <cite>(Davidson et al. 2017 )</cite> datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign). We use a LSTM based classifier model for training our model to classify these tweets into these three categories.
uses
2b6dd9388c43df4416c738b2d1ed5f_6
Results: Table 3 shows the performance of our model (after being trained on <cite>(Davidson et al. 2017)</cite>) with two types of embeddings in comparison to the models by (Mathur et al. 2018) and <cite>(Davidson et al. 2017)</cite> on the HEOT dataset, averaged over three runs.
uses similarities differences
2b7267b7b192aeca15c0d10a5f0a4b_0
An important work that has relevance here is <cite>[8]</cite>, where the authors present an even larger movie review dataset of 50,000 movie reviews from IMDB.
background
2b7267b7b192aeca15c0d10a5f0a4b_1
In <cite>[8]</cite>, for example, the authors who created the movie review dataset try on it their probabilistic model, which is able to capture semantic similarities between words.
background
2b7267b7b192aeca15c0d10a5f0a4b_2
Our scores on this task are somewhat lower than those reported in various studies that explore advanced deep learning constructs on the same dataset. In <cite>[8]</cite>, for example, the authors who created the movie review dataset try on it their probabilistic model, which is able to capture semantic similarities between words.
differences
2bb41cea97a0375f67eab3a77c3a97_0
Traditional relation-extraction systems rely on manual annotations or domain-specific rules provided by experts, both of which are scarce resources that are not portable across domains. To remedy these problems, recent years have seen interest in the distant supervision approach for relation extraction (Wu and Weld, 2007; <cite>Mintz et al., 2009)</cite> .
motivation
2bb41cea97a0375f67eab3a77c3a97_1
While the largest corpus (Wikipedia and New York Times) employed by recent work on distant supervision<cite> (Mintz et al., 2009</cite>; Hoffmann et al., 2011) contains about 2M documents, we run experiments on a 100M-document (50X more) corpus drawn from ClueWeb.
background
2bb41cea97a0375f67eab3a77c3a97_2
Since<cite> Mintz et al. (2009)</cite> coined the name "distant supervision," there has been growing interest in this technique.
background
2bb41cea97a0375f67eab3a77c3a97_3
At each step of the distant supervision process, we closely follow the recent literature<cite> (Mintz et al., 2009</cite>; .
similarities
2bb41cea97a0375f67eab3a77c3a97_4
Following recent work<cite> (Mintz et al., 2009</cite>; Hoffmann et al., 2011) , we use Freebase 5 as the knowledge base for seed facts.
similarities uses
2bb41cea97a0375f67eab3a77c3a97_5
As in previous work, we impose the constraint that both mentions (m 1 , m 2 ) ∈ R + i are contained in the same sentence<cite> (Mintz et al., 2009</cite>; Hoffmann et al., 2011) .
similarities uses
2bb41cea97a0375f67eab3a77c3a97_6
To generate negative examples for each relation, we follow the assumption in<cite> Mintz et al. (2009)</cite> that relations are disjoint and sample from other relations, i.e., R
similarities uses
2bb41cea97a0375f67eab3a77c3a97_7
Following recent work on distant supervision<cite> (Mintz et al., 2009</cite>; Hoffmann et al., 2011) , we use both lexical and syntactic features.
similarities uses
2bb41cea97a0375f67eab3a77c3a97_8
Interestingly, the Freebase held-out metric<cite> (Mintz et al., 2009</cite>; Hoffmann et al., 2011 ) turns out to be heavily biased toward distantly labeled data (e.g., increasing human feedback hurts precision; see Section 4.6).
differences
2bb41cea97a0375f67eab3a77c3a97_9
In addition to the TAC-KBP benchmark, we also follow prior work<cite> (Mintz et al., 2009</cite>; Hoffmann et al., 2011) and measure the quality using held-out data from Freebase.
differences
2c3a2999390b82f4e29b00d59f90f2_0
The most frequently applied technique in the CoNLL-2003 shared task is the Maximum Entropy Model. Three systems used Maximum Entropy Models in isolation (Bender et al., 2003; Chieu and Ng, 2003; Curran and Clark, 2003) . Two more systems used them in combination with other techniques<cite> (Florian et al., 2003</cite>; Klein et al., 2003) .
background
2c3a2999390b82f4e29b00d59f90f2_1
Hidden Markov Models were employed by four of the systems that took part in the shared task<cite> (Florian et al., 2003</cite>; Klein et al., 2003; Mayfield et al., 2003; Whitelaw and Patrick, 2003) .
background
2c3a2999390b82f4e29b00d59f90f2_2
Zhang and Johnson (2003) used robust risk minimization, which is a Winnow technique. <cite>Florian et al. (2003)</cite> employed the same technique in a combination of learners.
background
2c3a2999390b82f4e29b00d59f90f2_3
Transformation-based learning<cite> (Florian et al., 2003)</cite> , Support Vector Machines (Mayfield et al., 2003) and Conditional Random Fields (McCallum and Li, 2003) were applied by one system each.
background
2c3a2999390b82f4e29b00d59f90f2_4
<cite>Florian et al. (2003)</cite> tested different methods for combining the results of four systems and found that robust risk minimization worked best.
background
2c3a2999390b82f4e29b00d59f90f2_5
One participating team has used externally trained named entity recognition systems for English as part of a combined system<cite> (Florian et al., 2003)</cite>, comparing performance with this extra information to performance while using only the available training data.
background
2c3a2999390b82f4e29b00d59f90f2_6
The inclusion of extra named entity recognition systems seems to have worked well<cite> (Florian et al., 2003)</cite> .
background
2c3a2999390b82f4e29b00d59f90f2_7
For English, the combined classifier of <cite>Florian et al. (2003)</cite> achieved the highest overall F_β=1 rate.
background
2c3a2999390b82f4e29b00d59f90f2_8
<cite>Florian et al. (2003)</cite> have also obtained the highest F_β=1 rate for the German data.
background
2c3a2999390b82f4e29b00d59f90f2_9
A majority vote of five systems (Chieu and Ng, 2003;<cite> Florian et al., 2003</cite>; Klein et al., 2003; McCallum and Li, 2003; Whitelaw and Patrick, 2003) performed best on the English development data.
background
2c3a2999390b82f4e29b00d59f90f2_10
The best performance for both languages has been obtained by a combined learning system that used Maximum Entropy Models, transformation-based learning, Hidden Markov Models as well as robust risk minimization<cite> (Florian et al., 2003)</cite> .
background
2cedb1a0f0c0fbb9bd95d5b54e4967_0
Only a few approaches have attempted comprehension on multiparty dialogue <cite>Ma, Jurczyk, and Choi [2018]</cite>.
motivation background
2cedb1a0f0c0fbb9bd95d5b54e4967_1
Inspired by various options of analytic models and the potential of the dialogue processing market, we extend the corpus presented by <cite>Ma, Jurczyk, and Choi [2018]</cite> for comprehensive predictions of personal entities in multiparty dialogue and develop deep learning models to make robust inference on their contexts.
uses
2cedb1a0f0c0fbb9bd95d5b54e4967_2
Distinguished from the previous work that only focused on a single variable per passage <cite>Ma, Jurczyk, and Choi [2018]</cite> , we propose two new passage completion tasks on multiparty dialogue which increase the task complexity by replacing more character mentions with variables with a better motivated data split.
extends
2cedb1a0f0c0fbb9bd95d5b54e4967_3
Unlike the above tasks where documents and queries are written in a similar writing style, the multiparty dialogue reading comprehension task introduced by <cite>Ma, Jurczyk, and Choi [2018]</cite> has a very different writing style between dialogues and queries.
background
2cedb1a0f0c0fbb9bd95d5b54e4967_4
Plot summaries of all episodes for the first eight seasons were collected by Jurczyk and Choi [2017] to evaluate a document retrieval task. The rest of the plot summaries were collected by <cite>Ma, Jurczyk, and Choi [2018]</cite> .
background