Columns: id (string, 32–33 chars) · x (citation context, 41–1.75k chars) · y (citation intent label, 4–39 chars)
2cedb1a0f0c0fbb9bd95d5b54e4967_5
Table 1 shows the statistics of the corpus from <cite>Ma, Jurczyk, and Choi [2018]</cite>. Based on this corpus, we created a new data split different from <cite>Ma, Jurczyk, and Choi [2018]</cite>'s data split.
uses
2cedb1a0f0c0fbb9bd95d5b54e4967_6
In the previous work of <cite>Ma, Jurczyk, and Choi [2018]</cite>, a random data split was used in which 1,187 of the 1,349 queries in the development set and 1,207 of the 1,353 queries in the test set are generated from the same plot summaries as queries in the training set, differing only in which character entities are masked; as a result, the model can see the right answers in the training set.
motivation
2cedb1a0f0c0fbb9bd95d5b54e4967_7
We propose three tasks: one is from <cite>Ma, Jurczyk, and Choi [2018]</cite>, and the other two are new tasks designed by us.
uses
2cedb1a0f0c0fbb9bd95d5b54e4967_8
We propose three tasks: one is from <cite>Ma, Jurczyk, and Choi [2018]</cite>, and the other two are new tasks designed by us. The single variable task from <cite>Ma, Jurczyk, and Choi [2018]</cite> consists of a dialogue passage p, a query q taken from the plot summary of the dialogue passage, and an answer a. In this task (https://github.com/emorynlp/character-mining), a query q replaces exactly one character entity with an unknown variable x, and the machine is asked to infer the replaced character entity (the answer a) from all the possible entities appearing in the dialogue passage p. This task is evaluated by computing the accuracy of predictions (see Section ).
uses
2cedb1a0f0c0fbb9bd95d5b54e4967_9
The single variable task from <cite>Ma, Jurczyk, and Choi [2018]</cite> consists of a dialogue passage p, a query q taken from the plot summary of the dialogue passage, and an answer a. In this task (https://github.com/emorynlp/character-mining), a query q replaces exactly one character entity with an unknown variable x, and the machine is asked to infer the replaced character entity (the answer a) from all the possible entities appearing in the dialogue passage p. This task is evaluated by computing the accuracy of predictions (see Section ).
background
2cedb1a0f0c0fbb9bd95d5b54e4967_10
Based on <cite>Ma, Jurczyk, and Choi [2018]</cite> , we first use a CNN to extract the gram-level features of utterances and then use @ent04 asks @ent00 how someone could get a hold of @ent00 's credit card number and @ent00 is surprised at how much was spent .
uses
2cedb1a0f0c0fbb9bd95d5b54e4967_11
This method was the state-of-the-art (SOTA) method last year on <cite>Ma, Jurczyk, and Choi [2018]</cite>'s data split, and it is also selected as one of our experimental methods.
uses
2cedb1a0f0c0fbb9bd95d5b54e4967_12
Adding a CNN can achieve even lower accuracy because passing sequences to the CNN only keeps important information after the pooling operation, but for dialogue data, most of the time the replaced entity needs to be decided by <cite>Ma, Jurczyk, and Choi [2018]</cite> are not helpful for these tasks on our data split because dialogues contain so many informal expressions and the size of the corpus is small.
motivation
2cedb1a0f0c0fbb9bd95d5b54e4967_13
Results Table 4 shows the results of our experiment. BiL-STM is good at capturing the sequence information of sentences; however, since it only finds some kind of answer distributions on the sequence information, it cannot capture the information of the relation between query and utterance. Adding a CNN can achieve even lower accuracy because passing sequences to the CNN only keeps important information after the pooling operation, but for dialogue data, most of the time the replaced entity needs to be decided by <cite>Ma, Jurczyk, and Choi [2018]</cite> are not helpful for these tasks on our data split because dialogues contain so many informal expressions and the size of the corpus is small.
uses differences
2d2da2e9215691bffad74bfb97dbf3_0
This was the case in SemEval-2013, whose task 2 <cite>(Wilson et al., 2013)</cite> required sentiment analysis of Twitter and SMS text messages.
background
2d2da2e9215691bffad74bfb97dbf3_1
And perhaps this is the cause of the lower score in the unconstrained mode, something that also happened with many systems in the past edition <cite>(Wilson et al., 2013)</cite> .
similarities
2d2ec7230a651d1d6786d0f8a71f7e_0
These two lines of research converge in prior work to show, e.g., the increasing association of the lexical item 'gay' with the meaning dimension of homosexuality<cite> (Kim et al., 2014</cite>; Kulkarni et al., 2015) .
background
2d2ec7230a651d1d6786d0f8a71f7e_1
It is thus a continuation of prior work, in which we investigated historical English texts only (Hellrich and Hahn, 2016a) , and also influenced by the design decisions of <cite>Kim et al. (2014)</cite> and Kulkarni et al. (2015) which were the first to use word embeddings in diachronic studies.
uses
2d2ec7230a651d1d6786d0f8a71f7e_2
Word embeddings can be used rather directly for tracking semantic changes, namely by measuring the similarity of word representations generated for one word at different points in time; words which underwent semantic shifts will be dissimilar from themselves. These models must either be trained in a continuous manner where the model for each time span is initialized with its predecessor<cite> (Kim et al., 2014</cite>; Hellrich and Hahn, 2016b) , or a mapping between models for different points in time must be calculated (Kulkarni et al., 2015; Hamilton et al., 2016) . The first approach cannot be performed in parallel and is thus rather time-consuming if texts are not subsampled.
motivation background
2d2ec7230a651d1d6786d0f8a71f7e_3
These models must either be trained in a continuous manner where the model for each time span is initialized with its predecessor<cite> (Kim et al., 2014</cite>; Hellrich and Hahn, 2016b) , or a mapping between models for different points in time must be calculated (Kulkarni et al., 2015; Hamilton et al., 2016) .
background
2d2ec7230a651d1d6786d0f8a71f7e_4
The averaged cosine values between word embeddings before and after an epoch are used as a convergence measure c<cite> (Kim et al., 2014</cite>; Kulkarni et al., 2015) .
uses
2d2ec7230a651d1d6786d0f8a71f7e_5
The convergence criterion proposed by Kulkarni et al. (2015) , i.e., c = 0.9999, was never reached (this observation might be explained by Kulkarni et al.'s decision not to reset the learning rate for each training epoch, as was done by us and <cite>Kim et al. (2014)</cite> ).
similarities
2d7e98487698b0b6ae85f052402f7c_0
Prosodic Cues for DA Recognition: It has also been noted that prosodic knowledge plays a major role in DA identification for certain DA types <cite>(Stolcke et al., 2000)</cite> . The main reason is that the acoustic signal of the same utterance can be very different in a different DA class. This indicates that if one wants to classify DA classes only from the text, the context must be an important aspect to consider: simply classifying single utterances might not be enough, but considering the preceding utterances as a context is important.
background
2d7e98487698b0b6ae85f052402f7c_1
Lexical, Prosodic, and Syntactic Cues: Many studies have been carried out to find out the lexical, prosodic and syntactic cues <cite>(Stolcke et al., 2000</cite>; Surendran and Levow, 2006; O'Shea et al., 2012; Yang et al., 2014) .
background
2d7e98487698b0b6ae85f052402f7c_2
For the SwDA corpus, the state-of-the-art baseline result was 71% for more than a decade using a standard Hidden Markov Model (HMM) with language features such as words and n-grams<cite> (Stolcke et al., 2000)</cite> . The inter-annotator agreement accuracy for the same corpus is 84%, and in this particular case, we are still far from achieving human accuracy. However, words like 'yeah' appear in many classes such as backchannel, yes-answer, agree/accept etc.
motivation background
2d7e98487698b0b6ae85f052402f7c_3
We follow the same data split of 1115 training and 19 test conversations as in the baseline approach <cite>(Stolcke et al., 2000</cite>; Kalchbrenner and Blunsom, 2013) .
uses
2db25254f275303c41f1e7ab15a5e0_0
However, Sporleder and Lascarides (2008) show that models trained on explicitly marked examples generalize poorly to implicit relation identification. They argued that explicit and implicit examples may be linguistically dissimilar, as writers tend to avoid discourse connectives if the discourse relation could be inferred from context (Grice, 1975) . Similar observations are made by <cite>Rutherford and Xue (2015)</cite> , who attempt to add automatically-labeled instances to improve supervised classification of implicit discourse relations. In this paper, we approach this problem from the perspective of domain adaptation.
motivation background
2db25254f275303c41f1e7ab15a5e0_1
<cite>Rutherford and Xue (2015)</cite> explore several selection heuristics for adding automatically-labeled examples from Gigaword to their system for implicit relation detection, obtaining a 2% improvement in Macro-F1. Our work differs from these previous efforts in that we focus exclusively on training from automatically-labeled explicit instances, rather than supplementing a training set of manually-labeled implicit examples.
differences background
2db25254f275303c41f1e7ab15a5e0_2
It may also be desirable to ensure that the source and target training instances are similar in terms of their observed features; this is the idea behind the instance weighting approach to domain adaptation (Jiang and Zhai, 2007) . Motivated by this idea, we require that sampled instances from the source domain have a cosine similarity of at least τ with at least one target domain instance<cite> (Rutherford and Xue, 2015)</cite> .
background similarities
2db25254f275303c41f1e7ab15a5e0_3
In a pilot study we found that larger amounts of additional training data yielded no further improvements, which is consistent with the recent results of <cite>Rutherford and Xue (2015)</cite> .
similarities
2db25254f275303c41f1e7ab15a5e0_4
We have presented two methods -feature representation learning and resampling -from domain adaptation to close the gap of using explicit examples for unsupervised implicit discourse relation identification. Future work will explore the combination of this approach with more sophisticated techniques for instance selection<cite> (Rutherford and Xue, 2015)</cite> and feature selection (Park and Cardie, 2012; Biran and McKeown, 2013) , while also tackling the more difficult problems of multi-class relation classification and fine-grained level-2 discourse relations.
future_work
2eaa48dbc5e42a5934e905ec2288ac_0
Although traditional AES methods typically rely on handcrafted features (Larkey, 1998; Foltz et al., 1999; Attali and Burstein, 2006; Dikli, 2006; Wang and Brown, 2008; Chen and He, 2013; Somasundaran et al., 2014; Yannakoudakis et al., 2014; Phandi et al., 2015) , recent results indicate that state-of-the-art deep learning methods reach better performance (Alikaniotis et al., 2016; Dong and Zhang, 2016; Taghipour and Ng, 2016; Song et al., 2017; <cite>Tay et al., 2018</cite>) , perhaps because <cite>these methods</cite> are able to capture subtle and complex information that is relevant to the task (Dong and Zhang, 2016) .
background
2eaa48dbc5e42a5934e905ec2288ac_1
The empirical results indicate that our approach yields a better performance than state-of-the-art approaches (Phandi et al., 2015; Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
differences
2eaa48dbc5e42a5934e905ec2288ac_2
Since the official test data of the ASAP competition is not released to the public, we, as well as others before us (Phandi et al., 2015; Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) , use only the training data (https://www.kaggle.com/c/asap-aes/data) in our experiments.
similarities
2eaa48dbc5e42a5934e905ec2288ac_3
We compare our approach with stateof-the-art methods based on handcrafted features (Phandi et al., 2015) , as well as deep features (Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
uses
2eaa48dbc5e42a5934e905ec2288ac_4
We used functions from the VLFeat library. Table 2 : In-domain automatic essay scoring results of our approach versus several state-of-the-art methods (Phandi et al., 2015; Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
differences
2eaa48dbc5e42a5934e905ec2288ac_5
We first note that the histogram intersection string kernel alone reaches better overall performance (0.780) than all previous works (Phandi et al., 2015; Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
differences
2eaa48dbc5e42a5934e905ec2288ac_6
Although the BOSWE model can be regarded as a shallow approach, its overall results are comparable to those of deep learning approaches (Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
similarities
2eaa48dbc5e42a5934e905ec2288ac_7
The average QWK score of HISK and BOSWE (0.785) is more than 2% better than the average scores of the best-performing state-of-the-art approaches (<cite>Tay et al., 2018</cite>) .
differences
2eaa48dbc5e42a5934e905ec2288ac_8
We compared our approach on the Automated Student Assessment Prize data set, in both in-domain and crossdomain settings, with several state-of-the-art approaches (Phandi et al., 2015; Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
uses
2eaa48dbc5e42a5934e905ec2288ac_9
Using a shallow approach, we report better results compared to recent deep learning approaches (Dong and Zhang, 2016; <cite>Tay et al., 2018</cite>) .
differences
2ef456a3f6b043350121c4c5cfd404_0
Hence, an adaptive IS may use a large number of samples to solve this problem whereas NCE is more stable and requires a fixed small number of noise samples (e.g., 100) to achieve a good performance [13, <cite>16]</cite> .
background
2ef456a3f6b043350121c4c5cfd404_1
To alleviate this problem, noise samples can be shared across the batch<cite> [16]</cite> .
background
2ef456a3f6b043350121c4c5cfd404_2
Furthermore, we can show that this solution optimally approximates the sampling from a unigram distribution, which has been shown to be a good noise distribution choice [13, <cite>16]</cite> .
background
2ef456a3f6b043350121c4c5cfd404_3
This can be done by simply drawing an additional K samples from the noise distribution pn, and sharing them across the batch as was done in<cite> [16]</cite> .
background
2ef456a3f6b043350121c4c5cfd404_4
Each of the models is trained using the proposed B-NCE approach and the shared noise NCE (S-NCE)<cite> [16]</cite> .
uses
2ef456a3f6b043350121c4c5cfd404_6
Following the setup proposed in [13, <cite>16]</cite> , S-NCE uses K = 100 noise samples, whereas B-NCE uses only the target words in the batch (K=0).
uses
2ef456a3f6b043350121c4c5cfd404_7
Moreover, the performance of the small ReLu-LSTM is comparable to the LSTM models proposed in<cite> [16]</cite> and [18] which use large hidden layers.
background
2f7b64db6939786a5026fc033c85bd_0
Until recently, GRE algorithms have focussed on the generation of distinguishing descriptions that are either as short as possible (e.g. (Dale, 1992; Gardent, 2002) ) or almost as short as possible (e.g. <cite>(Dale and Reiter, 1995)</cite> ).
background
2f7b64db6939786a5026fc033c85bd_1
allow the Full Brevity algorithm (Dale, 1992) to be viewed as minimising cost(S), and the incremental algorithm <cite>(Dale and Reiter, 1995)</cite> as hill-climbing (strictly, hill-descending), guided by the property-ordering which that algorithm requires.
background
2f7b64db6939786a5026fc033c85bd_2
Standard GRE algorithms assume that the speaker knows what the hearer knows <cite>(Dale and Reiter, 1995)</cite> .
background
2fbf5397a8219923d1d9bc0464cb59_0
Related work on exploring syntactic structured information in pronoun resolution can be typically classified into three categories: parse tree-based search algorithms (Hobbs 1978) , feature-based (Lappin and Leass 1994; Bergsma and Lin 2006) and tree kernel-based methods<cite> (Yang et al 2006)</cite> .
background
2fbf5397a8219923d1d9bc0464cb59_1
As for tree kernel-based methods, <cite>Yang et al (2006)</cite> captured syntactic structured information for pronoun resolution by using the convolution tree kernel (Collins and Duffy 2001) to measure the common sub-trees enumerated from the parse trees and achieved considerable success on the ACE 2003 corpus.
background
2fbf5397a8219923d1d9bc0464cb59_2
Compared with Collins and Duffy's kernel and its application in pronoun resolution<cite> (Yang et al 2006)</cite> , the context-sensitive convolution tree kernel enumerates not only context-free sub-trees but also context-sensitive sub-trees by taking their ancestor node paths into consideration.
background
2fbf5397a8219923d1d9bc0464cb59_3
To deal with the cases where an anaphor and an antecedent candidate do not occur in the same sentence, we construct a pseudo parse tree for an entire text by attaching the parse trees of all its sentences to an upper "S" node, similar to <cite>Yang et al (2006)</cite> .
similarities
2fbf5397a8219923d1d9bc0464cb59_4
Figure 2 shows the three tree span schemes explored in <cite>Yang et al (2006)</cite> : Min-Expansion (only including the shortest path connecting the anaphor and the antecedent candidate), Simple-Expansion (containing not only all the nodes in Min-Expansion but also the first-level children of these nodes) and Full-Expansion (covering the sub-tree between the anaphor and the candidate), such as the sub-trees inside the dashed circles of Figures 2(a) , 2(b) and 2(c) respectively.
background
2fbf5397a8219923d1d9bc0464cb59_5
It is found<cite> (Yang et al 2006)</cite> that the simple-expansion tree span scheme performed best on the ACE 2003 corpus in pronoun resolution.
background
2fbf5397a8219923d1d9bc0464cb59_6
This convolution tree kernel has been successfully applied by <cite>Yang et al (2006)</cite> in pronoun resolution.
background
2fbf5397a8219923d1d9bc0464cb59_7
Table 1 systematically evaluates the impact of different m in our context-sensitive convolution tree kernel and compares our dynamic-expansion tree span scheme with the existing three tree span schemes, min-, simple-and full-expansions as described in <cite>Yang et al (2006)</cite> .
similarities uses
2fdfa1b36fcf0d77826c96101ac428_0
To address the model design issue, we discuss several recent solutions (He et al., 2016b; Li et al., 2016; <cite>Xiong et al., 2017)</cite> .
background
2fdfa1b36fcf0d77826c96101ac428_1
To address the model design issue, we discuss several recent solutions (He et al., 2016b; Li et al., 2016; <cite>Xiong et al., 2017)</cite> . We then focus on a new case study of hierarchical deep reinforcement learning for video captioning (Wang et al., 2018b) , discussing the techniques of leveraging hierarchies in DRL for NLP generation problems.
differences
2fdfa1b36fcf0d77826c96101ac428_2
We outline the applications of deep reinforcement learning in NLP, including dialog (Li et al., 2016) , semi-supervised text classification (Wu et al., 2018) , coreference (Clark and Manning, 2016; Yin et al., 2018) , knowledge graph reasoning<cite> (Xiong et al., 2017</cite> ), text games (Narasimhan et al., 2015; He et al., 2016a) , social media (He et al., 2016b; Zhou and Wang, 2018) , information extraction (Narasimhan et al., 2016; Qin et al., 2018) , language and vision (Pasunuru and Bansal, 2017; Misra et al., 2017; Wang et al., 2018a,b,c; Xiong et al., 2018) , etc.
background
2fdfa1b36fcf0d77826c96101ac428_3
To address the model design issue, we discuss several recent solutions (He et al., 2016b; Li et al., 2016; <cite>Xiong et al., 2017)</cite> .
background
2fdfa1b36fcf0d77826c96101ac428_4
To address the model design issue, we discuss several recent solutions (He et al., 2016b; Li et al., 2016; <cite>Xiong et al., 2017)</cite> . We then focus on a new case study of hierarchical deep reinforcement learning for video captioning (Wang et al., 2018b) , discussing the techniques of leveraging hierarchies in DRL for NLP generation problems.
differences
304773c64de1f0906f0246f2aa0d29_0
To extract opinion targets, pervious approaches usually relied on opinion words which are the words used to express the opinions (Hu and Liu, 2004a; Popescu and Etzioni, 2005; Liu et al., 2005; Wang and Wang, 2008; Qiu et al., 2011;<cite> Liu et al., 2012)</cite> .
background
304773c64de1f0906f0246f2aa0d29_1
To resolve these problems,<cite> Liu et al. (2012)</cite> formulated identifying opinion relations between words as a monolingual alignment process.
background
304773c64de1f0906f0246f2aa0d29_2
Although <cite>(Liu et al., 2012)</cite> had proved the effectiveness of WAM, they mainly performed experiments on a dataset of medium size.
motivation
304773c64de1f0906f0246f2aa0d29_3
<cite>(Liu et al., 2012)</cite> formulated identifying opinion relations between words as an alignment process.
background
304773c64de1f0906f0246f2aa0d29_4
We notice these two methods ( <cite>(Liu et al., 2012)</cite> and (Liu et al., 2013) ) only performed experiments on corpora of medium size.
motivation
304773c64de1f0906f0246f2aa0d29_5
To extract opinion targets from reviews, we adopt the framework proposed by <cite>(Liu et al., 2012)</cite> , which is a graph-based extraction framework and has two main components as follows.
uses
304773c64de1f0906f0246f2aa0d29_6
In this paper, we assume opinion targets to be nouns or noun phrases, and opinion words may be adjectives or verbs, which are usually adopted by (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008;<cite> Liu et al., 2012)</cite> .
similarities
304773c64de1f0906f0246f2aa0d29_7
Similar to <cite>(Liu et al., 2012)</cite> , every sentence in reviews is replicated to generate a parallel sentence pair, and the word alignment algorithm is applied to the monolingual scenario to align a noun/noun phrase with its modifiers.
uses
304773c64de1f0906f0246f2aa0d29_8
Then, similar to <cite>(Liu et al., 2012)</cite> , the association between an opinion target candidate and its modifier is estimated as follows.
uses
304773c64de1f0906f0246f2aa0d29_9
In the second component, we adopt a graph-based algorithm used in <cite>(Liu et al., 2012)</cite> to compute the confidence of each opinion target candidate, and the candidates with higher confidence than the threshold will be extracted as the opinion targets.
uses
304773c64de1f0906f0246f2aa0d29_10
Similar to <cite>(Liu et al., 2012)</cite> , we set each item in , where tf (v) is the term frequency of v in the corpus, and df (v) is computed by using the Google n-gram corpus 2 .
uses
304773c64de1f0906f0246f2aa0d29_11
In this section, to answer the questions mentioned in the first section, we collect a large collection named LARGE, which includes reviews from three different domains and different languages. This collection was also used in <cite>(Liu et al., 2012)</cite> .
similarities
304773c64de1f0906f0246f2aa0d29_12
To further prove the effectiveness of our combination, we compare PSWAM with some state-of-the-art methods, including Hu (Hu and Liu, 2004a) , which extracted frequent opinion target words based on association mining rules, DP (Qiu et al., 2011) , which extracted opinion targets through syntactic patterns, and LIU <cite>(Liu et al., 2012)</cite> , which fulfilled this task by using unsupervised WAM.
uses
304773c64de1f0906f0246f2aa0d29_13
To further prove the effectiveness of our combination, we compare PSWAM with some state-of-the-art methods, including Hu (Hu and Liu, 2004a) , which extracted frequent opinion target words based on association mining rules, DP (Qiu et al., 2011) , which extracted opinion targets through syntactic patterns, and LIU <cite>(Liu et al., 2012)</cite> , which fulfilled this task by using unsupervised WAM. The parameter settings in these baselines are the same as the settings in the original papers.
uses
30718e751f18432c2478442530267e_0
According to<cite> Jia and Liang (2017)</cite> , the single BiDAF system (Seo et al., 2016) only achieves an F1 score of 4.8 on the ADDANY adversarial dataset.
background
30718e751f18432c2478442530267e_1
According to<cite> Jia and Liang (2017)</cite> , the single BiDAF system (Seo et al., 2016) only achieves an F1 score of 4.8 on the ADDANY adversarial dataset. In this paper, we present a method to tackle this problem via answer sentence selection.
motivation
30718e751f18432c2478442530267e_2
However,<cite> Jia and Liang (2017)</cite> show that these systems are very vulnerable to paragraphs with adversarial sentences.
background
30718e751f18432c2478442530267e_3
Besides the single BiDAF, the single Match LSTM, the ensemble Match LSTM, and the ensemble BiDAF achieve an F1 of 7.6, 11.7, and 2.7 respectively in question answering on the ADDANY adversarial dataset<cite> (Jia and Liang, 2017)</cite> .
background
30718e751f18432c2478442530267e_4
Besides the single BiDAF, the single Match LSTM, the ensemble Match LSTM, and the ensemble BiDAF achieve an F1 of 7.6, 11.7, and 2.7 respectively in question answering on the ADDANY adversarial dataset<cite> (Jia and Liang, 2017)</cite> . Therefore, question answering with adversarial sentences in paragraphs is a prominent issue and is the focus of this study.
background motivation
30718e751f18432c2478442530267e_5
Our test set is<cite> Jia and Liang (2017)</cite>'s ADDANY adversarial dataset.
uses
30718e751f18432c2478442530267e_6
The performance of question answering is evaluated by the Macro-averaged F1 score (Rajpurkar et al., 2016; <cite>Jia and Liang, 2017</cite>) .
uses
30718e751f18432c2478442530267e_7
However, following the idea of adversarial examples in image recognition (Goodfellow et al., 2014; Kurakin et al., 2016; Papernot et al., 2016) ,<cite> Jia and Liang (2017)</cite> point out the unreliability of existing question answering models in the presence of adversarial sentences.
background
30718e751f18432c2478442530267e_8
However, following the idea of adversarial examples in image recognition (Goodfellow et al., 2014; Kurakin et al., 2016; Papernot et al., 2016) ,<cite> Jia and Liang (2017)</cite> point out the unreliability of existing question answering models in the presence of adversarial sentences. In this study, we propose a method to tackle this problem through answer sentence selection.
background motivation
30718e751f18432c2478442530267e_9
However,<cite> Jia and Liang (2017)</cite> also present the deterioration of QA systems on another dataset, ADDSENT adversarial dataset.
similarities
311b238406da4891c09cb9c3c0334d_0
This makes the task more difficult, compared to the sentiment analysis, but it can often bring complementary information <cite>[3]</cite> .
background
311b238406da4891c09cb9c3c0334d_1
We preprocessed the Czech commentaries by the same rules as in the original system <cite>[3]</cite> (for example: all urls were replaced by keyword URL, links to images are replaced by IMGURL, only letters are preserved, the rest of the characters is removed, …).
uses
311b238406da4891c09cb9c3c0334d_2
The original system <cite>[3]</cite> used more features, which could not be easily applied to Czech commentaries.
differences
311b238406da4891c09cb9c3c0334d_3
We did not identify strong candidates to build a domain specific dictionary as in <cite>[3]</cite> .
differences
3188ee1583a9c711cf147fc596768d_0
The techniques examined are Structural Correspondence Learning (SCL)<cite> (Blitzer et al., 2006)</cite> and Self-training (Abney, 2007; McClosky et al., 2006) .
background
3188ee1583a9c711cf147fc596768d_1
We examine Structural Correspondence Learning (SCL)<cite> (Blitzer et al., 2006)</cite> for this task, and compare it to several variants of Self-training (Abney, 2007; McClosky et al., 2006) .
similarities
3188ee1583a9c711cf147fc596768d_2
So far, Structural Correspondence Learning has been applied successfully to PoS tagging and Sentiment Analysis <cite>(Blitzer et al., 2006)</cite> .
background
3188ee1583a9c711cf147fc596768d_3
Structural Correspondence Learning<cite> (Blitzer et al., 2006)</cite> exploits unlabeled data from both source and target domain to find correspondences among features from different domains.
background
3188ee1583a9c711cf147fc596768d_4
Pivots are features occurring frequently and behaving similarly in both domains<cite> (Blitzer et al., 2006)</cite> .
background
3188ee1583a9c711cf147fc596768d_5
Intuitively, if we are able to find good correspondences through 'linking' pivots, then the augmented source data should transfer better to a target domain<cite> (Blitzer et al., 2006)</cite> .
similarities
3188ee1583a9c711cf147fc596768d_6
So far, pivot features on the word level were used <cite>(Blitzer et al., 2006)</cite> .
background
3188ee1583a9c711cf147fc596768d_7
In our empirical setup, we follow<cite> Blitzer et al. (2006)</cite> and balance the size of source and target data.
similarities uses
3188ee1583a9c711cf147fc596768d_8
The paper compares Structural Correspondence Learning<cite> (Blitzer et al., 2006)</cite> with (various instances of) self-training (Abney, 2007; McClosky et al., 2006) for the adaptation of a parse selection model to Wikipedia domains.
similarities
31b06dfc081149e1e436f0bb5e0904_0
As a global trend, we observe that models that incorporate rich global features are typically more accurate, even if pruning is necessary or decoding needs to be approximate (Koo and Collins, 2010; Bohnet and Nivre, 2012; Martins et al., 2009; <cite>Martins et al., 2013</cite>) .
motivation background
31b06dfc081149e1e436f0bb5e0904_1
The parser was built as an extension of a recent dependency parser, TurboParser (Martins et al., 2010; <cite>Martins et al., 2013</cite>) , with the goal of performing semantic parsing using any of the three formalisms considered in the shared task (DM, PAS, and PSD).
uses
31b06dfc081149e1e436f0bb5e0904_2
Most of these features were taken from TurboParser <cite>(Martins et al., 2013)</cite> , and others were inspired by the semantic parser of Johansson and Nugues (2008) .
uses
31e8c524f05495fdd87bfac6fbecc8_0
We present a reproduction and extension to the work of <cite>Schulder et al. (2017)</cite> , <cite>which</cite> introduced a lexicon of verbal polarity shifters, as well as methods to increase the size of this lexicon through bootstrapping.
extends uses