Columns: id (string, 32-33 chars); x (string, 41-1.75k chars); y (string, 4-39 chars)
22dc2a38e29a1f5ac55c9ac220782b_7
Contrary to <cite>Vaswani et al. (2017)</cite> , we only use a single attention head, with attention performed on the complete sequence with constant d-dimensional inputs.
differences
23119eff3cfd71370e8ad408fc75e1_0
Very recently,<cite> Lee et al. (2017)</cite> proposed the first state-of-the-art end-to-end neural coreference resolution system.
background
23119eff3cfd71370e8ad408fc75e1_1
We adopt the same span representation approach as in<cite> Lee et al. (2017)</cite>, using bidirectional LSTMs and a head-finding attention.
similarities uses
23119eff3cfd71370e8ad408fc75e1_2
Compared with the traditional FFNN approach in<cite> Lee et al. (2017)</cite>, biaffine attention directly models both the compatibility of s_i and s_j by ŝ_j U_bi ŝ_i and the prior likelihood of s_i having an antecedent by v_bi ŝ_i.
uses similarities
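For reference, a minimal LaTeX sketch of the biaffine antecedent score described in the example above; ŝ_i, ŝ_j, U_bi and v_bi follow the sentence, while the additive combination and the transposes are our assumption rather than the cited paper's exact formulation:

```latex
% Biaffine antecedent scoring (sketch): a bilinear compatibility term
% between spans s_i and s_j, plus a prior term for s_i having an antecedent.
\[
  \mathrm{score}(s_i, s_j) \;=\;
  \underbrace{\hat{s}_j^{\top} U_{bi}\, \hat{s}_i}_{\text{compatibility}}
  \;+\;
  \underbrace{v_{bi}^{\top} \hat{s}_i}_{\text{antecedent prior}}
\]
```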
23119eff3cfd71370e8ad408fc75e1_3
Therefore,<cite> Lee et al. (2017)</cite> train the model end-to-end by maximizing the following marginal log-likelihood, where GOLD(i) are the gold antecedents for s_i:
background
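A hedged reconstruction of the marginal log-likelihood objective referred to above, assuming Y(i) denotes the candidate antecedents of span s_i, GOLD(i) its gold antecedents, D the document, and N the number of spans (only GOLD(i) and s_i appear in the excerpt; the remaining symbols are ours):

```latex
% Marginalize over all gold antecedents of each span s_i.
\[
  \mathcal{L} \;=\; \log \prod_{i=1}^{N}
  \sum_{\hat{y} \,\in\, \mathcal{Y}(i) \,\cap\, \mathrm{GOLD}(i)} P(\hat{y} \mid D)
\]
```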
23119eff3cfd71370e8ad408fc75e1_4
Implementation Details For fair comparisons, we follow the same hyperparameters as in<cite> Lee et al. (2017)</cite> .
similarities uses
23119eff3cfd71370e8ad408fc75e1_5
Table 2: Ablation study on the development set (F1): Our model (single) 67.8; without mention detection loss 67.5; without biaffine attention 67.4;<cite> Lee et al. (2017)</cite> 67.3.
background
23119eff3cfd71370e8ad408fc75e1_6
Based on the results on the development set, λ_detection = 0.1 works best among {0.05, 0.1, 0.5, 1.0}. The model is trained with the ADAM optimizer (Kingma and Ba, 2015) and converges in around 200K updates, which is faster than the model of<cite> Lee et al. (2017)</cite>. In particular, compared with<cite> Lee et al. (2017)</cite>, our improvement mainly results from the precision scores.
differences
23119eff3cfd71370e8ad408fc75e1_7
While Moosavi and Strube (2017) observe that there is a large overlap between the gold mentions of the training and dev (test) sets, we find that our model can correctly detect 1048 mentions which are not detected by<cite> Lee et al. (2017)</cite> , consisting of 386 mentions existing in training data and 662 mentions not existing in training data.
differences
23119eff3cfd71370e8ad408fc75e1_8
(2) Mention-ranking models explicitly rank all previous candidate mentions for the current mention and select a single highest scoring antecedent for each anaphoric mention (Denis and Baldridge, 2007b; Wiseman et al., 2015; Clark and Manning, 2016a; <cite>Lee et al., 2017)</cite> .
background
24506b0aa7a859eb8744e390f9fb60_0
Meanwhile, several previous works (Carreras, 2007; <cite>Koo and Collins, 2010)</cite> have shown that grandchild interactions provide important information for dependency parsing.
background
24506b0aa7a859eb8744e390f9fb60_1
Meanwhile, several previous works (Carreras, 2007; <cite>Koo and Collins, 2010)</cite> have shown that grandchild interactions provide important information for dependency parsing. However, the computational cost of the parsing algorithm increases with the need for more expressive factorizations.
motivation
24506b0aa7a859eb8744e390f9fb60_2
However, the computational cost of the parsing algorithm increases with the need for more expressive factorizations. Consequently, the most powerful existing parser<cite> (Koo and Collins, 2010</cite>) is limited to third-order parts, which requires O(n^4) time and O(n^3) space.
motivation
24506b0aa7a859eb8744e390f9fb60_3
Following<cite> Koo and Collins (2010)</cite> , we refer to these augmented structures as g-spans.
uses
24506b0aa7a859eb8744e390f9fb60_4
Following previous works (McDonald and Pereira, 2006; <cite>Koo and Collins, 2010)</cite>, the fourth-order parser captures not only features associated with the corresponding fourth-order grand-tri-sibling parts, but also the features of relevant lower-order parts that are enclosed in its factorization.
uses
24506b0aa7a859eb8744e390f9fb60_5
The second set of features is defined as backed-off features<cite> (Koo and Collins, 2010)</cite> for the grand-tri-sibling part (g, s, r, m, t): the 4-gram (g, r, m, t), which never exists in any lower-order part.
uses
24506b0aa7a859eb8744e390f9fb60_6
Following<cite> Koo and Collins (2010)</cite>, two versions of POS tags are used for any feature involving POS: one uses the normal POS tags and the other uses a coarsened version of the POS tags.
uses
24506b0aa7a859eb8744e390f9fb60_7
We compare our method to first-order and second-order sibling dependency parsers (McDonald and Pereira, 2006), and two third-order graph-based parsers<cite> (Koo and Collins, 2010)</cite>.
uses
24506b0aa7a859eb8744e390f9fb60_8
Our results are also better than the results of the two third-order graph-based dependency parsing models in<cite> Koo and Collins (2010)</cite> .
differences
24506b0aa7a859eb8744e390f9fb60_9
Here we compare our method to an implementation of the third-order grand-sibling parser, whose parsing performance on CTB is not reported in<cite> Koo and Collins (2010)</cite>, and to the dynamic-programming transition-based parser of Huang and Sagae (2010).
differences
247bbc4eb671895222065ed425f968_0
Several sites have made significant progress to lower the WER to within the 5%-10% range on the Switchboard-CallHome subsets of the Hub5 2000 evaluation <cite>[2</cite>, 3, 4, 5] .
background
247bbc4eb671895222065ed425f968_1
Several sites have made significant progress to lower the WER to within the 5%-10% range on the Switchboard-CallHome subsets of the Hub5 2000 evaluation <cite>[2</cite>, 3, 4, 5] . Given the progress on conversational telephone speech, we focus on the other closely related broadcast news recognition task that received similar attention within the DARPA EARS program.
motivation
247bbc4eb671895222065ed425f968_2
In terms of the amount of training data available from the DARPA EARS program for training systems on CTS and BN, there are a few significant differences as well. The CTS acoustic training corpus consists of approximately 2000 hours of speech with human transcriptions <cite>[2]</cite>. In contrast, models being developed for BN typically use lightly supervised transcripts for training [6].
background
247bbc4eb671895222065ed425f968_3
In <cite>[2,</cite> 3] we describe state-of-the-art speech recognition systems on the CTS task using multiple LSTM and ResNet acoustic models trained on various acoustic features along with word and character LSTMs and convolutional WaveNet-style language models. In this paper we develop a similar but simpler variant for BN.
similarities motivation
247bbc4eb671895222065ed425f968_4
In addition to automatic speech recognition results, similar to <cite>[2]</cite> , we also present human performance on the same BN test sets.
similarities
247bbc4eb671895222065ed425f968_5
Similar to <cite>[2]</cite> , human performance measurements on two broadcast news tasks -RT04 and DEV04F -are carried out by Appen.
similarities
247bbc4eb671895222065ed425f968_6
The transcriptions were also filtered to remove non-speech markers, partial words, punctuation marks etc as described in <cite>[2]</cite> .
uses
247bbc4eb671895222065ed425f968_8
In <cite>[2]</cite> , two kinds of acoustic models, a convolutional and a non-convolutional acoustic model with comparable performance, are used since they produce good complementary outputs which can be further combined for improved performance. The convolutional network used in that work is a residual network (ResNet) and an LSTM is used as the non-convolutional network. Similar to that work, in this paper also we train ResNet and LSTM based acoustic models.
similarities
247bbc4eb671895222065ed425f968_9
To complement the LSTM acoustic model, we train a deep Residual Network based on the best performing architecture proposed in <cite>[2]</cite> .
uses
247bbc4eb671895222065ed425f968_10
In comparison with the results obtained on the CTS evaluation with similar acoustic models <cite>[2]</cite> , the LSTM and ResNet operate at similar WERs.
similarities
247bbc4eb671895222065ed425f968_11
We observe significant WER gains after using the LSTM LMs similar to those reported in <cite>[2]</cite> .
similarities
247bbc4eb671895222065ed425f968_12
4. Compared to the telephone conversation confusions recorded in <cite>[2]</cite>, one symbol that is clearly missing is the back-channel response; this probably stems from the very nature of the BN domain.
differences
247bbc4eb671895222065ed425f968_13
5. Similar to the telephone conversation confusions reported in <cite>[2]</cite>, human performance is much higher because the number of deletions is significantly lower: compare 2.3% vs 0.8%/0.6% for deletion errors in Table 5.
similarities
24b38363d53468175e0274ac0b4fd3_0
Excitingly, the state of the art has recently shifted toward novel semi-supervised techniques such as the incorporation of word embeddings to represent the context of words and concepts<cite> (Tang et al., 2014b)</cite> .
background
24b38363d53468175e0274ac0b4fd3_1
In previous work (Tang et al., 2014a; <cite>Tang et al., 2014b)</cite> sentiment-specific word embeddings have been used as features for identification of tweet-level sentiment but not phrase-level sentiment.
background
24b38363d53468175e0274ac0b4fd3_2
In previous work (Tang et al., 2014a; <cite>Tang et al., 2014b)</cite> sentiment-specific word embeddings have been used as features for identification of tweet-level sentiment but not phrase-level sentiment. In this work we present two different strategies for learning phrase level sentiment specific word embeddings.
motivation background
24b38363d53468175e0274ac0b4fd3_3
For each strategy, class and dimension, we used the functions suggested by<cite> (Tang et al., 2014b)</cite> (average, maximum and minimum), resulting in 2,400 features.
uses
24b38363d53468175e0274ac0b4fd3_4
We also employed the word embeddings encoding sentiment information generated through the unified models in<cite> (Tang et al., 2014b)</cite> .
uses
24b38363d53468175e0274ac0b4fd3_5
Contrary to the approach by<cite> (Tang et al., 2014b)</cite> , we didn't integrate the sentiment information in the word embeddings training process, but rather the sentiment-specific nature of the embeddings was reflected in the choice of different training datasets, yielding different word embedding features for positive and negative tweets.
differences
24ee9b2bd8c97cbe923bc747b09806_0
Our work is most closely related to the models presented in [12, 13, 14,<cite> 15]</cite> .
similarities
24ee9b2bd8c97cbe923bc747b09806_1
Our work is most closely related to the models presented in [12, 13, 14,<cite> 15]</cite> . In the current study we improve upon these previous approaches to visual grounding of speech and present state-of-the-art image-caption retrieval results.
extends similarities
24ee9b2bd8c97cbe923bc747b09806_2
The work by [12, 13, 14,<cite> 15]</cite> and the results presented here are a step towards more cognitively plausible models of language learning as it is more natural to learn language without prior assumptions about the lexical level.
background
24ee9b2bd8c97cbe923bc747b09806_3
The approach is based on our own text-based model described in [8] and on the speech-based models described in [13,<cite> 15]</cite> and we refer to those studies for more details.
uses background
24ee9b2bd8c97cbe923bc747b09806_4
We use importance sampling to select the mismatched pairs; rather than using all the other samples in the mini-batch as mismatched pairs (as done in [8,<cite> 15]</cite> ), we calculate the loss using only the hardest examples (i.e. mismatched pairs with high cosine similarity).
differences
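A minimal sketch (not the authors' code) of the hard-negative variant of a batch ranking loss matching the description above, assuming L2-normalized image and caption embeddings so that dot products are cosine similarities; the margin value and function name are illustrative:

```python
import numpy as np

def hard_negative_hinge_loss(img_emb, cap_emb, margin=0.2):
    """Batch hinge loss that keeps only the hardest mismatched pair per
    positive pair instead of summing over all mismatched pairs in the batch.

    img_emb, cap_emb: arrays of shape (batch, dim), L2-normalized,
    where row i of each array forms a matching image-caption pair.
    """
    sims = img_emb @ cap_emb.T                    # cosine similarity matrix
    pos = np.diag(sims)                           # similarities of matched pairs
    mask = np.eye(sims.shape[0], dtype=bool)
    masked = np.where(mask, -np.inf, sims)        # exclude matched pairs

    neg_cap = masked.max(axis=1)                  # hardest wrong caption per image
    neg_img = masked.max(axis=0)                  # hardest wrong image per caption

    loss_cap = np.maximum(0.0, margin + neg_cap - pos)
    loss_img = np.maximum(0.0, margin + neg_img - pos)
    return (loss_cap + loss_img).mean()
```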
24ee9b2bd8c97cbe923bc747b09806_5
The main differences with the approaches described in [13,<cite> 15]</cite> are the use of multi-layered GRUs, importance sampling, the cyclic learning rate, snapshot ensembling and the use of vectorial rather than scalar attention.
differences
24ee9b2bd8c97cbe923bc747b09806_6
While our model is not explicitly trained to recognise words or segment the speech signal, previous work has shown that such information can be extracted by visual grounding models<cite> [15,</cite> 28] .
background
24ee9b2bd8c97cbe923bc747b09806_7
<cite>[15]</cite> use a binary decision task: given a word and a sentence embedding, decide if the word occurs in the sentence.
background
24ee9b2bd8c97cbe923bc747b09806_8
We compare our models to [12] and <cite>[15]</cite> , and include our own character-based model for comparison.
uses
24ee9b2bd8c97cbe923bc747b09806_9
[12] is a convolutional approach, whereas <cite>[15]</cite> is an approach using recurrent highway networks with scalar attention.
background
24ee9b2bd8c97cbe923bc747b09806_10
We compare our models to [12] and <cite>[15]</cite> , and include our own character-based model for comparison. [12] is a convolutional approach, whereas <cite>[15]</cite> is an approach using recurrent highway networks with scalar attention.
uses
24ee9b2bd8c97cbe923bc747b09806_11
The largest improvement comes from using the learned MBN features but our approach also improves results for MFCCs, which are the same features as were used in <cite>[15]</cite> .
uses
24ee9b2bd8c97cbe923bc747b09806_12
We are currently collecting the Semantic Textual Similarity (STS) database in spoken format and the next step will be to investigate whether the model presented here also learns to capture sentence level semantic information and understand language in a deeper sense than recognising word presence. The work presented in <cite>[15]</cite> has made the first efforts in this regard and we aim to extend this to a larger database with sentences from multiple domains.
extends
24ee9b2bd8c97cbe923bc747b09806_13
The work presented in <cite>[15]</cite> has made the first efforts in this regard and we aim to extend this to a larger database with sentences from multiple domains.
background
2504d707a8123774791d98b755551a_0
This also enables us to do inference efficiently, since our inference time is merely the inference time of two sequential CRFs; in contrast,<cite> Finkel et al. (2005)</cite> reported an increase in running time by a factor of 30 over the sequential CRF with their Gibbs sampling approximate inference.
differences
2504d707a8123774791d98b755551a_2
However, as can be seen from table 2, we find that the consistency constraint does not hold nearly so strictly in this case. A very common case of this in the CoNLL dataset is that of documents containing references to both The China Daily, a newspaper, and China, the country<cite> (Finkel et al., 2005)</cite> .
similarities
2504d707a8123774791d98b755551a_3
At the same time, the simplicity of our two-stage approach keeps inference time down to just the inference time of two sequential CRFs, when compared to approaches such as those of<cite> Finkel et al. (2005)</cite> who report that their inference time with Gibbs sampling goes up by a factor of about 30, compared to the Viterbi algorithm for the sequential CRF.
differences
2504d707a8123774791d98b755551a_4
• Most existing work to capture label consistency has attempted to create all n^2 pairwise dependencies between the different occurrences of an entity,<cite> (Finkel et al., 2005</cite>; Sutton and McCallum, 2004), where n is the number of occurrences of the given entity.
background
2504d707a8123774791d98b755551a_5
Below, we give some intuition about areas for improvement in existing work and explain how our approach incorporates the improvements. • Most existing work to capture label consistency has attempted to create all n^2 pairwise dependencies between the different occurrences of an entity,<cite> (Finkel et al., 2005</cite>; Sutton and McCallum, 2004), where n is the number of occurrences of the given entity. This complicates the dependency graph, making inference harder.
motivation
2504d707a8123774791d98b755551a_6
• Most work has looked to model non-local dependencies only within a document<cite> (Finkel et al., 2005</cite>; Chieu and Ng, 2002; Sutton and McCallum, 2004; Bunescu and Mooney, 2004) .
background
2504d707a8123774791d98b755551a_7
The simplicity of our approach makes it easy to incorporate dependencies across the whole corpus, which would be relatively much harder to incorporate in approaches like (Bunescu and Mooney, 2004) and<cite> (Finkel et al., 2005)</cite> .
differences
2504d707a8123774791d98b755551a_8
Additionally, our approach makes it possible to do inference in just about twice the inference time of a single sequential CRF; in contrast, approaches like Gibbs sampling that model the dependencies directly can increase inference time by a factor of 30<cite> (Finkel et al., 2005</cite>).
differences
2504d707a8123774791d98b755551a_9
We also compare our performance against (Bunescu and Mooney, 2004) and<cite> (Finkel et al., 2005)</cite> and find that we manage higher relative improvement than existing work despite starting from a very competitive baseline CRF.
differences
2504d707a8123774791d98b755551a_10
Recent work looking to directly model non-local dependencies and do approximate inference is that of Bunescu and Mooney (2004), who use a Relational Markov Network (RMN) (Taskar et al., 2002) to explicitly model long-distance dependencies; Sutton and McCallum (2004), who introduce skip-chain CRFs, which add additional non-local edges to the underlying CRF sequence model (which Bunescu and Mooney (2004) lack); and<cite> Finkel et al. (2005)</cite>, who hand-set penalties for inconsistency in labels based on the training data and then use Gibbs sampling for approximate inference, where the goal is to obtain the label sequence that maximizes the product of the CRF objective function and their penalty.
background
2504d707a8123774791d98b755551a_11
The approach of<cite> Finkel et al. (2005)</cite> makes it possible to model a broader class of long-distance dependencies than Sutton and McCallum (2004), because they do not need to make any initial assumptions about which nodes should be connected, and they too model dependencies between whole token sequences representing entities and between entity token sequences and their token supersequences that are entities.
background
250a88831a4911f76acca3c9d318de_0
The algorithm, which is an extension of<cite> Sassano's (2004)</cite> , allows us to chunk morphemes into base phrases and decide dependency relations of the phrases in a strict left-toright manner.
uses
250a88831a4911f76acca3c9d318de_1
A bunsetsu is a base phrasal unit and consists of one or more content words followed by zero or more function words. In addition, most algorithms for Japanese dependency parsing, e.g., (Sekine et al., 2000;<cite> Sassano, 2004)</cite>, assume the three constraints below. (1) Each bunsetsu has only one head, except the rightmost one. (2) Dependency links between bunsetsus go from left to right. (3) Dependency links do not cross one another.
background
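As an illustration of the three constraints listed in the example above, a small check of our own (not from the cited work), where heads[i] gives the index of the bunsetsu that bunsetsu i depends on and the rightmost bunsetsu has no head:

```python
def satisfies_constraints(heads):
    """heads[i] = index of the head bunsetsu of bunsetsu i;
    heads[-1] is None because the rightmost bunsetsu has no head."""
    n = len(heads)
    # (1) Each bunsetsu has exactly one head, except the rightmost one.
    if heads[-1] is not None or any(h is None for h in heads[:-1]):
        return False
    # (2) Dependency links go from left to right.
    if any(h <= i for i, h in enumerate(heads[:-1])):
        return False
    # (3) Dependency links do not cross one another.
    for i in range(n - 1):
        for j in range(i + 1, n - 1):
            if i < j < heads[i] < heads[j]:
                return False
    return True
```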
250a88831a4911f76acca3c9d318de_2
Most of the modern dependency parsers for Japanese require bunsetsu chunking (base phrase chunking) before dependency parsing (Sekine et al., 2000; Kudo and Matsumoto, 2002;<cite> Sassano, 2004)</cite> .
background
250a88831a4911f76acca3c9d318de_3
The algorithm that we propose is based on<cite> (Sassano, 2004)</cite> , which is considered to be a simple form of shift-reduce parsing.
uses
250a88831a4911f76acca3c9d318de_4
The flow of the algorithm, which has the same structure as<cite> Sassano's (2004)</cite> , is controlled with a stack that holds IDs for modifier morphemes.
uses
250a88831a4911f76acca3c9d318de_5
See<cite> (Sassano, 2004)</cite> for further details.
background
250a88831a4911f76acca3c9d318de_6
We have designed rather simple features based on the common feature set (Uchimoto et al., 1999; Kudo and Matsumoto, 2002;<cite> Sassano, 2004)</cite> for bunsetsu-based parsers.
uses
250a88831a4911f76acca3c9d318de_7
The system with the previous method employs the algorithm<cite> (Sassano, 2004</cite> ) with the voted perceptron.
extends uses
250a88831a4911f76acca3c9d318de_8
We implemented a parser that employs the algorithm of<cite> (Sassano, 2004)</cite> with the commonly used features and runs with VP instead of SVM, which <cite>Sassano (2004)</cite> originally used.
uses
250a88831a4911f76acca3c9d318de_9
To enable us to compare them, we gave bunsetsu-chunked sentences produced by our parser to the parser of<cite> (Sassano, 2004)</cite> on the Kyoto University Corpus. We then received the results from the parser of<cite> (Sassano, 2004)</cite>, which are bunsetsu-based dependency structures, and converted them to morpheme-based structures that follow the scheme we propose in this paper.
extends uses
250a88831a4911f76acca3c9d318de_10
We implemented a parser that employs the algorithm of<cite> (Sassano, 2004)</cite> with the commonly used features and runs with VP instead of SVM, which <cite>Sassano (2004)</cite> originally used. His parser, which cannot do bunsetsu chunking, accepts only a chunked sentence and then produces a bunsetsu-based dependency structure. Thus we cannot directly compare results with ours.
motivation differences
253d635829c733309bb49fc1fcc1cd_0
Automatic detection of fake from legitimate news in different formats such as headlines, tweets and full news articles has been approached in recent Natural Language Processing literature (Vlachos and Riedel, 2014; Vosoughi, 2015; Jin et al., 2016; Rashkin et al., 2017;<cite> Wang, 2017</cite>; Pomerleau and Rao, 2017; Thorne et al., 2018) .
background
253d635829c733309bb49fc1fcc1cd_1
The Liar dataset<cite> (Wang, 2017)</cite> is the first large dataset collected through reliable annotation, but it contains only short statements.
background
253d635829c733309bb49fc1fcc1cd_2
These methods have been used for fake news detection in previous work (Rashkin et al., 2017;<cite> Wang, 2017</cite>). Therefore, we use this model to demonstrate how a classifier trained on data labeled according to the publisher's reputation would identify misinformative news articles.
uses background
25e03048cd34685cec34754bdade4e_0
Generative models defining joint distributions over parse trees and sentences are good theoretical models for interpreting natural language data, and appealing tools for tasks such as parsing, grammar induction and language modeling (Collins, 1999; Henderson, 2003; Titov and Henderson, 2007; Petrov and Klein, 2007;<cite> Dyer et al., 2016)</cite> .
background
25e03048cd34685cec34754bdade4e_1
Generative models defining joint distributions over parse trees and sentences are good theoretical models for interpreting natural language data, and appealing tools for tasks such as parsing, grammar induction and language modeling (Collins, 1999; Henderson, 2003; Titov and Henderson, 2007; Petrov and Klein, 2007;<cite> Dyer et al., 2016)</cite> . However, they often impose strong independence assumptions which restrict the use of arbitrary features for effective disambiguation. Moreover, generative parsers are typically trained by maximizing the joint probability of the parse tree and the sentence-an objective that only indirectly relates to the goal of parsing. In this work, we propose a parsing and language modeling framework that marries a generative model with a discriminative recognition algorithm in order to have the best of both worlds.
motivation
25e03048cd34685cec34754bdade4e_2
We showcase the framework using Recurrent Neural Network Grammars (RNNGs;<cite> Dyer et al. 2016</cite> ), a recently proposed probabilistic model of phrase-structure trees based on neural transition systems.
uses
25e03048cd34685cec34754bdade4e_3
In this section we briefly describe Recurrent Neural Network Grammars (RNNGs;<cite> Dyer et al. 2016</cite> ), a top-down transition-based algorithm for parsing and generation.
uses
25e03048cd34685cec34754bdade4e_4
Specifically, we use the following features: 1) the stack embedding d_t, which encodes the stack of the decoder and is obtained with a stack-LSTM (Dyer et al., 2015; <cite>Dyer et al., 2016)</cite>; 2) the output buffer embedding o_t; we use a standard LSTM to compose the output buffer, and o_t is represented as the most recent state of the LSTM; and 3) the parent non-terminal embedding n_t, which is accessible in the generative model because the RNNG employs a depth-first generation order.
uses
25e03048cd34685cec34754bdade4e_5
6 See § 4 and Appendix A for a comparison between this objective and the importance sampler of<cite> Dyer et al. (2016</cite>).
uses
25e03048cd34685cec34754bdade4e_6
7 Another way of computing p(x) (without lower bounding) would be to use the variational approximation q(a|x) as the proposal distribution as in the importance sampler of<cite> Dyer et al. (2016)</cite> .
uses
25e03048cd34685cec34754bdade4e_9
Further connections can be drawn with the importance-sampling based inference of<cite> Dyer et al. (2016)</cite> .
uses
25e03048cd34685cec34754bdade4e_10
To find the MAP parse tree argmax_a p(a, x) (where p(a, x) is used to rank the output of q(a|x)) and to compute the language modeling perplexity (where a ∼ q(a|x)), we collect 100 samples from q(a|x), same as<cite> Dyer et al. (2016)</cite>.
uses
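A sketch of the sample-and-rank procedure described above, assuming sample_tree draws an action sequence a from the discriminative proposal q(a|x) and joint_logprob returns log p(a, x) under the generative model; both function names are hypothetical:

```python
def map_parse(sample_tree, joint_logprob, num_samples=100):
    """Approximate the MAP parse: sample candidate trees from the
    proposal q(a|x) and rerank them with the joint model p(a, x)."""
    candidates = [sample_tree() for _ in range(num_samples)]
    return max(candidates, key=joint_logprob)
```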
25e03048cd34685cec34754bdade4e_12
methods for parsing, ranking approximated MAP trees from q(a|x) with respect to p(a, x) yields a small improvement, as in<cite> Dyer et al. (2016)</cite> .
uses
25e03048cd34685cec34754bdade4e_13
It is worth noting that our parsing performance lags behind<cite> Dyer et al. (2016)</cite> .
differences
25e03048cd34685cec34754bdade4e_14
While<cite> Dyer et al. (2016)</cite> use an LSTM as the syntactic composition function of each subtree, we adopt a rather simple composition function based on embedding averaging, which gains computational efficiency but loses accuracy.
differences
25e03048cd34685cec34754bdade4e_15
On language modeling, our framework achieves lower perplexity compared to<cite> Dyer et al. (2016)</cite> and baseline models.
differences
25e03048cd34685cec34754bdade4e_16
However, we acknowledge a subtle difference between<cite> Dyer et al. (2016)</cite> and our approach compared to baseline language models: while the latter incrementally estimate the next word probability, our approach<cite> (and Dyer et al. 2016</cite> ) directly assigns probability to the entire sentence.
similarities differences
25e03048cd34685cec34754bdade4e_17
Overall, the advantage of our framework compared to<cite> Dyer et al. (2016)</cite> is that it opens an avenue to unsupervised training.
differences
25e03048cd34685cec34754bdade4e_18
In the future, we would like to perform grammar induction based on Equation (8), with gradient descent and posterior regularization techniques (Ganchev et al., 2010). A Comparison to Importance Sampling<cite> (Dyer et al., 2016)</cite>: In this appendix we highlight the connections between importance sampling and variational inference, thereby comparing our method with<cite> Dyer et al. (2016)</cite>.
uses
25e03048cd34685cec34754bdade4e_19
As shown in Rubinstein and Kroese (2008), the optimal choice of the proposal distribution is in fact the true posterior p(a|x), in which case the importance weight p(a,x)/p(a|x) = p(x) is constant with respect to a. In<cite> Dyer et al. (2016)</cite>, the proposal distribution depends on x, i.e., q(a) = q(a|x), and is computed with a separately-trained, discriminative model.
background
25e03048cd34685cec34754bdade4e_20
As shown in Rubinstein and Kroese (2008), the optimal choice of the proposal distribution is in fact the true posterior p(a|x), in which case the importance weight p(a,x)/p(a|x) = p(x) is constant with respect to a. In<cite> Dyer et al. (2016)</cite>, the proposal distribution depends on x, i.e., q(a) = q(a|x), and is computed with a separately-trained, discriminative model. This proposal choice is close to optimal, since in a fully supervised setting a is also observed and the discriminative model can be trained to approximate the true posterior well. We hypothesize that the performance of their importance sampler is dependent on this specific proposal distribution.
motivation
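For context, the standard importance-sampling identity consistent with the statement above: with proposal q(a|x), the marginal likelihood p(x) can be estimated from K samples, and when q(a|x) equals the true posterior p(a|x) the weight reduces to p(x) for every a (K and a^(k) are our notation):

```latex
% Importance-sampling estimate of the marginal likelihood p(x).
\[
  p(x) \;=\; \sum_{a} p(a, x)
       \;=\; \mathbb{E}_{a \sim q(a \mid x)}\!\left[\frac{p(a, x)}{q(a \mid x)}\right]
  \;\approx\; \frac{1}{K} \sum_{k=1}^{K} \frac{p(a^{(k)}, x)}{q(a^{(k)} \mid x)},
  \qquad a^{(k)} \sim q(a \mid x).
\]
```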
260489da0fb3f7a201a6a1cce8f03b_0
End-to-end neural machine translation (NMT) is a newly proposed paradigm for machine translation [Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014;<cite> Bahdanau et al., 2015]</cite> .
background
260489da0fb3f7a201a6a1cce8f03b_1
While early NMT models encode a source sentence as a fixed-length vector, <cite>Bahdanau et al. [2015]</cite> advocate the use of attention in NMT.
background
260489da0fb3f7a201a6a1cce8f03b_2
Such an attentional mechanism has proven to be an effective technique in text generation tasks such as machine translation<cite> [Bahdanau et al., 2015</cite>; Luong et al., 2015] and image caption generation [Xu et al., 2015]. The encoder-decoder framework [Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014;<cite> Bahdanau et al., 2015]</cite> usually uses a recurrent neural network (RNN) to encode the source sentence into a sequence of hidden states h = h_1, ..., h_m, ..., h_M, where h_m is the hidden state of the m-th source word and f is a non-linear function.
background
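A hedged sketch of the recurrence implied by the non-linear function f mentioned above, with x_m the embedding of the m-th source word; this is the common formulation for such encoders, not necessarily the exact equation elided from the excerpt:

```latex
% RNN encoder: each hidden state depends on the current word embedding
% and the previous hidden state.
\[
  h_m \;=\; f\!\left(x_m,\; h_{m-1}\right), \qquad m = 1, \dots, M.
\]
```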
260489da0fb3f7a201a6a1cce8f03b_3
For example, <cite>Bahdanau et al. [2015]</cite> use a bidirectional RNN and concatenate the forward and backward states as the hidden state of a source word to capture both forward and backward contexts.
background