id
stringlengths
32
33
x
stringlengths
41
1.75k
y
stringlengths
4
39
16780bd3c2b350f6d61f2f55f9f88c_0
For our study, we use a small corpus of Enron email threads which has been previously annotated with dialog acts <cite>(Hu et al., 2009</cite> ).
uses
16780bd3c2b350f6d61f2f55f9f88c_1
An utterance has one of 5 dialog acts: RequestAction, RequestInformation, Inform, Commit and Conventional (see <cite>(Hu et al., 2009</cite> ) for details).
background
16780bd3c2b350f6d61f2f55f9f88c_2
We use the manual gold dialog act annotations present in our corpus, which use a very small dialog act tag set. An utterance has one of 5 dialog acts: RequestAction, RequestInformation, Inform, Commit and Conventional (see <cite>(Hu et al., 2009</cite> ) for details).
uses background
16780bd3c2b350f6d61f2f55f9f88c_3
We instead use the DA tagger of<cite> Hu et al. (2009)</cite> , which we re-trained using the training sets for each of our cross validation folds, applying it to the test set of that fold.
uses
17252628fa9c03c2fe0b44763fc7a2_0
Syntax-based pre-ordering by employing constituent parsing have demonstrated effectiveness in many language pairs, such as English-French (Xia and McCord, 2004) , German-English (Collins et al., 2005) , Chinese-English <cite>(Wang et al., 2007</cite>; Zhang et al., 2008) , and English-Japanese (Lee et al., 2010) .
background
17252628fa9c03c2fe0b44763fc7a2_1
The pre-ordering rules can be made manually (Collins et al., 2005;<cite> Wang et al., 2007</cite>; Han et al., 2012) or extracted automatically from a parallel corpus (Xia and McCord, 2004; Habash, 2007; Zhang et al., 2007; Wu et al., 2011) .
background
17252628fa9c03c2fe0b44763fc7a2_2
Since dependency parsing is more concise than constituent parsing in describing sentences, some research has used dependency parsing in pre-ordering approaches for language pairs such as Arabic-English (Habash, 2007) , and English-SOV languages (Xu et al., 2009; Katz-Brown et al., 2011) . The pre-ordering rules can be made manually (Collins et al., 2005;<cite> Wang et al., 2007</cite>; Han et al., 2012) or extracted automatically from a parallel corpus (Xia and McCord, 2004; Habash, 2007; Zhang et al., 2007; Wu et al., 2011) .
background
17252628fa9c03c2fe0b44763fc7a2_3
Since dependency parsing is more concise than constituent parsing in describing sentences, some research has used dependency parsing in pre-ordering approaches for language pairs such as Arabic-English (Habash, 2007) , and English-SOV languages (Xu et al., 2009; Katz-Brown et al., 2011) . The pre-ordering rules can be made manually (Collins et al., 2005;<cite> Wang et al., 2007</cite>; Han et al., 2012) or extracted automatically from a parallel corpus (Xia and McCord, 2004; Habash, 2007; Zhang et al., 2007; Wu et al., 2011) . The purpose of this paper is to introduce a novel dependency-based pre-ordering approach through creating a pre-ordering rule set and applying it to the Chinese-English PBSMT system.
motivation
17252628fa9c03c2fe0b44763fc7a2_4
Experiment results showed that our pre-ordering rule set improved the BLEU score on the NIST 2006 evaluation data by 1.61. Moreover, this rule set substantially decreased the total times of rule application about 60%, compared with a constituent-based approach<cite> (Wang et al., 2007)</cite> .
differences
17252628fa9c03c2fe0b44763fc7a2_5
The most similar work to this paper is that of<cite> Wang et al. (2007)</cite> .
similarities
17252628fa9c03c2fe0b44763fc7a2_6
We argue that even though the rules by<cite> Wang et al. (2007)</cite> exist, it is almost impossible to automatically convert their rules into rules that are applicable to dependency parsers.
motivation
17252628fa9c03c2fe0b44763fc7a2_7
The most similar work to this paper is that of<cite> Wang et al. (2007)</cite> . They created a set of preordering rules for constituent parsers for ChineseEnglish PBSMT. In contrast, we propose a set of pre-ordering rules for dependency parsers.
similarities differences
17252628fa9c03c2fe0b44763fc7a2_8
We used the MOSES PBSMT system in our experiments. The training data, which included those data used in<cite> Wang et al. (2007)</cite> , contained 1 million pairs of sentences extracted from the Linguistic Data Consortium's parallel news corpora. Our development set was the official NIST MT evaluation data from 2002 to 2005, consisting of 4476 Chinese-English sentences pairs.
differences
17252628fa9c03c2fe0b44763fc7a2_9
We used the MOSES PBSMT system in our experiments. The training data, which included those data used in<cite> Wang et al. (2007)</cite> , contained 1 million pairs of sentences extracted from the Linguistic Data Consortium's parallel news corpora.
similarities
17252628fa9c03c2fe0b44763fc7a2_10
We implemented the constituent-based preordering rule set in<cite> Wang et al. (2007)</cite> for comparison, which is called WR07 below.
uses
17252628fa9c03c2fe0b44763fc7a2_11
Similar to<cite> Wang et al. (2007)</cite> , we carried out human evaluations to assess the accuracy of our dependency-based pre-ordering rules by employing the system "OUR DEP 2" in Table 1 .
uses similarities
17252628fa9c03c2fe0b44763fc7a2_12
The overall accuracy of this rule set is 60.0%, which is almost at the same level as the WR07 rule set (62.1%), according to the similar evaluation (200 sentences and one annotator) conducted in<cite> Wang et al. (2007)</cite> .
similarities
17252628fa9c03c2fe0b44763fc7a2_13
Notice that some of the incorrect pre-orderings may be caused by erroneous parsing as also suggested by<cite> Wang et al. (2007)</cite> .
similarities
17d44521cfdd351d29b4e5f80d41cd_0
Transition-based dependency parsing (Yamada and Matsumoto, 2003; Nivre et al., 2006b; Zhang and Clark, 2008;<cite> Huang and Sagae, 2010</cite> ) utilize a deterministic shift-reduce process for making structural predictions.
background
17d44521cfdd351d29b4e5f80d41cd_1
In the aspect of decoding, beam-search (Johansson and Nugues, 2007; Zhang and Clark, 2008; Huang et al., 2009 ) and partial dynamic-programming<cite> (Huang and Sagae, 2010)</cite> have been applied to improve upon greedy one-best search, and positive results were reported.
background
17d44521cfdd351d29b4e5f80d41cd_2
Recent research have focused on action sets that build projective dependency trees in an arc-eager (Nivre et al., 2006b; Zhang and Clark, 2008) or arc-standard (Yamada and Matsumoto, 2003;<cite> Huang and Sagae, 2010)</cite> process.
background
17d44521cfdd351d29b4e5f80d41cd_3
Recent research have focused on action sets that build projective dependency trees in an arc-eager (Nivre et al., 2006b; Zhang and Clark, 2008) or arc-standard (Yamada and Matsumoto, 2003;<cite> Huang and Sagae, 2010)</cite> process. We adopt the arc-eager system 1 , for which the actions are: • Shift, which removes the front of the queue and pushes it onto the top of the stack; • Reduce, which pops the top item off the stack; • LeftArc, which pops the top item off the stack, and adds it as a modifier to the front of the queue; • RightArc, which removes the front of the queue, pushes it onto the stack and adds it as a modifier to the top of the stack.
uses
17d44521cfdd351d29b4e5f80d41cd_4
These features are mostly taken from Zhang and Clark (2008) and<cite> Huang and Sagae (2010)</cite> , and our parser reproduces the same accuracies as reported by both papers.
similarities uses
17d44521cfdd351d29b4e5f80d41cd_5
2 Following<cite> Huang and Sagae (2010)</cite>, we assign POS-tags to the training data using ten-way jackknifing.
uses
17d44521cfdd351d29b4e5f80d41cd_6
Table 4 shows the final test results of our parser for English. We include in the table results from the pure transition-based parser of Zhang and Clark (2008) (row 'Z&C08 transition'), the dynamic-programming arc-standard parser of<cite> Huang and Sagae (2010)</cite> (row 'H&S10'), and graphbased models including MSTParser (McDonald and Pereira, 2006) , the baseline feature parser of Koo et al. (2008) (row 'K08 baeline') , and the two models of Koo and Collins (2010 ing the highest attachment score reported for a transition-based parser, comparable to those of the best graph-based parsers.
uses
17d44521cfdd351d29b4e5f80d41cd_7
Table 5 shows the results of our final parser, the pure transition-based parser of Zhang and Clark (2008) , and the parser of<cite> Huang and Sagae (2010)</cite> on Chinese.
uses
17d44521cfdd351d29b4e5f80d41cd_8
Table 5 shows the results of our final parser, the pure transition-based parser of Zhang and Clark (2008) , and the parser of<cite> Huang and Sagae (2010)</cite> on Chinese. Our scores for this test set are the best reported so far and significantly better than the previous systems.
differences
17d44521cfdd351d29b4e5f80d41cd_9
The effect of the new features appears to outweigh the effect of combining transition-based and graph-based models, reported by Zhang and Clark (2008) , as well as the effect of using dynamic programming, as in-<cite> Huang and Sagae (2010)</cite> .
differences
17eb0ea80e5a2f18096ef41521af4e_0
Our work tries to learn the main concepts making up the template structure in domain summaries, similar to <cite>(Chambers and Jurafsky, 2011)</cite> .
similarities
17eb0ea80e5a2f18096ef41521af4e_1
Our work demonstrates the possibility of learning conceptual information in several domains and languages, while previous work <cite>(Chambers and Jurafsky, 2011)</cite> has addressed sets of related domains (e.g., MUC-4 templates) in English.
differences
188f10a5b78a5e691e10d180dfde6f_0
Various NLP tasks have benefited from domain adaptation techniques, including part-ofspeech tagging (Blitzer et al., 2006; Huang and Yates, 2010a) , chunking (Daumé III, 2007;<cite> Huang and Yates, 2009)</cite> , named entity recognition (Guo et al., 2009; Turian et al., 2010) , dependency parsing (Dredze et al., 2007; Sagae and Tsujii, 2007) and semantic role labeling (Dahlmeier and Ng, 2010; Huang and Yates, 2010b) .
background
188f10a5b78a5e691e10d180dfde6f_1
A number of techniques have been developed in the literature to tackle the problem of cross-domain feature divergence and feature sparsity, including clustering based word representation learning methods<cite> (Huang and Yates, 2009</cite>; Candito et al., 2011) , word embedding based representation learning methods (Turian et al., 2010; Hovy et al., 2015) and some other representation learning methods (Blitzer et al., 2006) .
background
188f10a5b78a5e691e10d180dfde6f_2
The proposed approach is closely related to the clustering based method<cite> (Huang and Yates, 2009</cite> ) as we both use latent state representations as generalizable features.
similarities uses
188f10a5b78a5e691e10d180dfde6f_3
For example,<cite> Huang and Yates (2009)</cite> used the discrete hidden state of a word under HMMs as augmenting features for cross-domain POS tagging and NP chunking.
background
188f10a5b78a5e691e10d180dfde6f_4
Previous works have demonstrated the usefulness of discrete hidden states induced from a HMM on addressing feature sparsity in domain adaptation<cite> (Huang and Yates, 2009</cite> ).
background
188f10a5b78a5e691e10d180dfde6f_5
We used the same experimental datasets as in<cite> (Huang and Yates, 2009</cite> ) for cross-domain POS tagging from Wall Street Journal (WSJ) domain (Marcus et al., 1993) to MED-LINE domain (PennBioIE, 2005) and for crossdomain NP chunking from CoNLL shared task dataset (Tjong et al., 2000) to Open American National Corpus (OANC) (Reppen et al., 2005) .
uses similarities
18a44fac8d2f450aee62fc15c00c6f_0
One can also assign interpretations; for example, <cite>[27]</cite> argue their LAS self-attention heads are differentiated phoneme detectors.
background
18a44fac8d2f450aee62fc15c00c6f_1
Hybrid self-attention/LSTM encoders were studied in the context of listenattend-spell (LAS) <cite>[27]</cite> , and the Transformer was directly adapted to speech in [19, 28, 29] ; both are encoder-decoder systems.
background
18a44fac8d2f450aee62fc15c00c6f_2
Unlike past works, we do not require convolutional frontends [19] or interleaved recurrences <cite>[27]</cite> to train self-attention for ASR.
differences
18a44fac8d2f450aee62fc15c00c6f_3
Wide contexts also enable incorporation of noise/speaker contexts, as <cite>[27]</cite> suggest regarding the broad-context attention heads in the first layer of their self-attentional LAS model.
background
18a44fac8d2f450aee62fc15c00c6f_4
Our proposed framework ( Figure 1a ) is built around self-attention layers, as used in the Transformer encoder [22] , previous explorations of self-attention in ASR [19,<cite> 27]</cite> , and defined in Section 2.3.
similarities uses
18a44fac8d2f450aee62fc15c00c6f_5
A convolutional frontend is a typical downsampling strategy [8, 19] ; however, we leave integrating other layer types into SAN-CTC as future work. Instead, we consider three fixed approaches, from least-to most-preserving of the input data: subsampling, which only takes every k-th frame; pooling, which aggregates every k consecutive frames via a statistic (average, maximum); reshaping, where one concatenates k consecutive frames into one <cite>[27]</cite> .
extends differences
18a44fac8d2f450aee62fc15c00c6f_6
The latter was found necessary for self-attentional LAS <cite>[27]</cite> , as additive encodings did not give convergence.
differences
18a44fac8d2f450aee62fc15c00c6f_7
We see that unlike self-attentional LAS <cite>[27]</cite> , SAN-CTC works respectably even with no position en- coding; in fact, the contribution of position is relatively minor (compare with [21] , where location in an encoder-decoder system improved CER by 3% absolute).
differences
18a44fac8d2f450aee62fc15c00c6f_8
Inspired by <cite>[27]</cite> , we plot the standard deviation of attention weights for each head as training progresses; see Figure 2 for details.
similarities uses
18a44fac8d2f450aee62fc15c00c6f_9
In the first layers, we similarly observe a differentiation of variances, along with wide-context heads; in later layers, unlike <cite>[27]</cite> we still see mild differentiation of variances.
differences
193d388c3f4c346cb62711f3f04c0f_0
State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it<cite> (Conneau et al., 2016)</cite> .
background
193d388c3f4c346cb62711f3f04c0f_1
State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it<cite> (Conneau et al., 2016)</cite> . In a departure from this philosophy, we propose a divide-and-conquer approach, where a team of readers each focus on different aspects of the text, and then combine their representations to make a joint decision.
differences
193d388c3f4c346cb62711f3f04c0f_2
Compared to deep Convolutional Networks (CNN) for text (Zhang et al., 2015; <cite>Conneau et al., 2016)</cite> , the MVN strategy emphasizes network width over depth.
differences
193d388c3f4c346cb62711f3f04c0f_3
That is, we replace Equation 5 with v i = s<cite> (Conneau et al., 2016)</cite>
uses
193d388c3f4c346cb62711f3f04c0f_4
The AG corpus (Zhang et al., 2015; <cite>Conneau et al., 2016)</cite> contains categorized news articles from more than 2,000 news outlets on the web.
background
193d388c3f4c346cb62711f3f04c0f_5
The AG corpus (Zhang et al., 2015; <cite>Conneau et al., 2016)</cite> contains categorized news articles from more than 2,000 news outlets on the web. A random sample of the training set was used for hyper-parameter tuning.
uses
193d388c3f4c346cb62711f3f04c0f_6
These results show that the bag-of-words MVN outperforms the state-of-theart accuracy obtained by the non-neural n-gram TFIDF approach (Zhang et al., 2015) , as well as several very deep CNNs<cite> (Conneau et al., 2016)</cite> .
differences
197b557d7b5c7c2d195be84990719b_0
For example, <cite>Mikolov et al. (2013)</cite> utilize Skipgram NegativeSampling (SGNS) to train word embeddings using word-context pairs formed from windows moving across a text corpus.
background
197b557d7b5c7c2d195be84990719b_1
We are interested in modifying the Skipgram Negative-Sampling (SGNS) objective in<cite> (Mikolov et al., 2013)</cite> to utilize document-wide feature vectors while simultaneously learning continuous document weights loading onto topic vectors.
extends
197b557d7b5c7c2d195be84990719b_2
Each word is represented with a fixedlength dense distributed-representation vector, but unlike <cite>Mikolov et al. (2013)</cite> the same word vectors are used in both the pivot and target representations.
differences
197b557d7b5c7c2d195be84990719b_3
As in <cite>Mikolov et al. (2013)</cite> , pairs of pivot and target words (j, i) are extracted when they cooccur in a moving window scanning across the corpus.
uses
197b557d7b5c7c2d195be84990719b_4
Unless stated otherwise, the negative sampling power beta is set to 3/4 and the number of negative samples is fixed to n = 15 as in <cite>Mikolov et al. (2013)</cite> .
uses
197b557d7b5c7c2d195be84990719b_5
<cite>Mikolov et al. (2013)</cite> provide the intuition that word vectors can be summed together to form a semantically meaningful combination of both words.
background
197b557d7b5c7c2d195be84990719b_6
Word vectors are initialized to the pretrained values found in <cite>Mikolov et al. (2013)</cite> but otherwise updates are allowed to these vectors at training time.
uses
197b557d7b5c7c2d195be84990719b_7
The first, which we Figure 5 demonstrates that token similarities are learned in a similar fashion as in SGNS<cite> (Mikolov et al., 2013</cite> ) but specialized to the Hacker News corpus.
similarities
197b557d7b5c7c2d195be84990719b_8
This work demonstrates a simple model, lda2vec, that extends SGNS<cite> (Mikolov et al., 2013)</cite> to build unsupervised document representations that yield coherent topics.
extends
19b647ab74d28b59b7df2be729b2d7_0
The style of an utterance can be altered based on requirements; introducing elements of sarcasm, or aspects of factual and emotional argumentation styles<cite> [15,</cite> 14] .
background
19b647ab74d28b59b7df2be729b2d7_1
By using machine learning models designed to classify different classes of interest, such as sentiment, sarcasm, and topic, data can be bootstrapped to greatly increase the amount of data available for indexing and utterance selection <cite>[15]</cite> .
background
1ab7893c2a930bc5af3c34a5912dd2_0
Of the two state-of-the-art approaches on dialog act recognition, one uses a deep stack of Recurrent Neural Networks (RNNs) (Schmidhuber, 1990) to capture long distance relations between tokens (Khanpour et al., 2016) , while the other uses multiple parallel temporal Convolutional Neural Networks (CNNs) (Fukushima, 1980) to capture relevant functional patterns with different length <cite>(Liu et al., 2017)</cite> .
background
1ab7893c2a930bc5af3c34a5912dd2_1
Thus, only speaker information that is directly related to the dialog, such as turn-taking <cite>(Liu et al., 2017)</cite> , is typically considered.
background
1ab7893c2a930bc5af3c34a5912dd2_2
Concerning information from the surrounding segments, its influence, especially that of preceding segments, has been thoroughly explored in at least two studies (Ribeiro et al., 2015; <cite>Liu et al., 2017)</cite> .
background
1ab7893c2a930bc5af3c34a5912dd2_3
On the convolutional side, <cite>Liu et al. (2017)</cite> generated the segment representation by combining the outputs of three parallel CNNs with different context window sizes, in order to capture different functional patterns.
background
1ab7893c2a930bc5af3c34a5912dd2_4
<cite>Liu et al. (2017)</cite> used 200-dimensional Word2Vec embeddings trained on Facebook data.
background
1ab7893c2a930bc5af3c34a5912dd2_5
Still, Khanpour et al. (2016) reported 73.9% accuracy on the validation set and 80.1% on the test set, while <cite>Liu et al. (2017)</cite> reported 74.5% and 76.9% accuracy on the two sets used to evaluate <cite>their experiments</cite>.
background
1ab7893c2a930bc5af3c34a5912dd2_6
Additionally, <cite>Liu et al. (2017)</cite> explored the use of context information concerning speaker changes and from the surrounding segments.
background
1ab7893c2a930bc5af3c34a5912dd2_7
Additionally, <cite>Liu et al. (2017)</cite> explored the use of context information concerning speaker changes and from the surrounding segments. The first was provided as a flag and concatenated to the segment representation. Concerning the latter, <cite>they explored</cite> the use of discourse models, as well as of approaches that concatenated the context information directly to the segment representation.
background
1ab7893c2a930bc5af3c34a5912dd2_8
Additionally, <cite>Liu et al. (2017)</cite> explored the use of context information concerning speaker changes and from the surrounding segments. The first was provided as a flag and concatenated to the segment representation. Concerning the latter, <cite>they explored</cite> the use of discourse models, as well as of approaches that concatenated the context information directly to the segment representation. <cite>The discourse models</cite> transform the model into a hierarchical one by generating a sequence of dialog act classifications from the sequence of segment representations.
background
1ab7893c2a930bc5af3c34a5912dd2_9
Additionally, <cite>Liu et al. (2017)</cite> explored the use of context information concerning speaker changes and from the surrounding segments. The first was provided as a flag and concatenated to the segment representation. Concerning the latter, <cite>they explored</cite> the use of discourse models, as well as of approaches that concatenated the context information directly to the segment representation. <cite>The discourse models</cite> transform the model into a hierarchical one by generating a sequence of dialog act classifications from the sequence of segment representations. Thus, when predicting the classification of a segment, the surrounding ones are also taken into account. However, when the <cite>discourse model</cite> is based on a CNN or a bidirectional LSTM unit, it considers information from future segments, which is not available for a dialog system.
background
1ab7893c2a930bc5af3c34a5912dd2_10
Additionally, <cite>Liu et al. (2017)</cite> explored the use of context information concerning speaker changes and from the surrounding segments. The first was provided as a flag and concatenated to the segment representation. Concerning the latter, <cite>they explored</cite> the use of discourse models, as well as of approaches that concatenated the context information directly to the segment representation. <cite>The discourse models</cite> transform the model into a hierarchical one by generating a sequence of dialog act classifications from the sequence of segment representations. Thus, when predicting the classification of a segment, the surrounding ones are also taken into account. However, when the <cite>discourse model</cite> is based on a CNN or a bidirectional LSTM unit, it considers information from future segments, which is not available for a dialog system. Still, even when relying on future information, the approaches based on <cite>discourse models</cite> performed worse than those that concatenated the context information directly to the segment representation.
background
1ab7893c2a930bc5af3c34a5912dd2_11
In this sense, similarly to our previous study using SVMs (Ribeiro et al., 2015) , <cite>Liu et al. (2017)</cite> concluded that providing that information in the form of the classification of the surrounding segments leads to better results than using <cite>their words</cite>, even when those classifications are obtained automatically.
background
1ab7893c2a930bc5af3c34a5912dd2_12
Using the setup with gold standard labels from three preceding segments, <cite>Liu et al. (2017)</cite> achieved 79.6% and 81.8% on the two sets used to evaluate the approach.
background
1ab7893c2a930bc5af3c34a5912dd2_13
The resulting word embeddings are 200-dimensional as in the study by <cite>Liu et al. (2017)</cite> .
similarities
1ab7893c2a930bc5af3c34a5912dd2_14
This is a dense layer which maps the segment representations into a 100-dimensional space, as in the study by <cite>Liu et al. (2017)</cite> .
similarities
1ab7893c2a930bc5af3c34a5912dd2_15
As stated in Section 3, of the two state-of-the-art approaches on dialog act recognition, one uses a RNN-based approach (Khanpour et al., 2016) for segment representation, while the other uses one based on CNNs <cite>(Liu et al., 2017)</cite> .
background
1ab7893c2a930bc5af3c34a5912dd2_16
As described in Section 3, the convolutional approach by <cite>Liu et al. (2017)</cite> uses a set of parallel temporal CNNs with different window size, each followed by a max pooling operation.
background
1ab7893c2a930bc5af3c34a5912dd2_17
To achieve the results presented in <cite>their paper</cite>, <cite>Liu et al. (2017)</cite> used three CNNs with 100 filters and 1, 2, and 3 as context window sizes.
background
1ab7893c2a930bc5af3c34a5912dd2_18
As stated in Section 3, Khanpour et al. (2016) explored embedding spaces with dimensionality 75, 150, and 300 together with different embedding approaches. In every case, the embedding space with dimensionality 150 led to the best results. <cite>Liu et al. (2017)</cite> used a different dimensionality value, 200, in <cite>their study</cite>.
background
1ab7893c2a930bc5af3c34a5912dd2_19
Khanpour et al. (2016) used pre-trained embeddings using both approaches in their study and achieved their best results using Word2Vec embeddings trained on Wikipedia data. <cite>Liu et al. (2017)</cite> also used Word2Vec embeddings, but trained on Facebook data.
background
1ab7893c2a930bc5af3c34a5912dd2_20
In <cite>their study</cite>, <cite>Liu et al. (2017)</cite> used pre-trained embeddings but let them adapt to the task during the training phase. However, they did not perform a comparison with the case where the embeddings are not adaptable. Thus, in our study we experimented with both fixed and adaptable embeddings.
motivation
1ab7893c2a930bc5af3c34a5912dd2_22
Starting with the dimensionality of the embedding space, in Table 3 we can see that using an embedding space with 200 dimensions, such as in the study by <cite>Liu et al. (2017)</cite> , leads to better results than any of the dimensionality values used by Khanpour et al. (2016) .
similarities
1ab7893c2a930bc5af3c34a5912dd2_23
for dialog act recognition is the dialog history, with influence decaying with distance (Ribeiro et al., 2015; Lee & Dernoncourt, 2016; <cite>Liu et al., 2017)</cite> .
background
1ab7893c2a930bc5af3c34a5912dd2_24
However, information concerning the speakers and, more specifically, turn-taking has also been proved important <cite>(Liu et al., 2017)</cite> .
background
1ab7893c2a930bc5af3c34a5912dd2_25
<cite>Liu et al. (2017)</cite> further showed that using a single label per segment is better than using the probability of each class.
background
1ab7893c2a930bc5af3c34a5912dd2_26
In our previous study, we have used up to five preceding segments and showed that the gain becomes smaller as the number of preceding segments increases, which supports the claim that the closest segments are the most relevant. <cite>Liu et al. (2017)</cite> stopped at three preceding segments, but noticed a similar pattern.
similarities
1ab7893c2a930bc5af3c34a5912dd2_27
Although both our previous study and that by <cite>Liu et al. (2017)</cite> used the classifications of preceding segments as context information, none of them took into account that those segments have a sequential nature and simply flattened the sequence before appending it to the segment representation.
motivation
1ab7893c2a930bc5af3c34a5912dd2_28
Thus, turn-taking information is relevant for dialog act recognition. In fact, this has been confirmed in the study by <cite>Liu et al. (2017)</cite> . Thus, we also use turn-taking information in this study.
motivation
1ab7893c2a930bc5af3c34a5912dd2_29
Starting with the reproduction of the flat label sequence approach, in Table 9 we can see that the results follow the same pattern as in our previous study and that by <cite>Liu et al. (2017)</cite> .
similarities
1ab7893c2a930bc5af3c34a5912dd2_30
We used adaptations of the approaches with top performance in previous studies, namely the RNN-based approach by Khanpour et al. (2016) and the CNN-based approach by <cite>Liu et al. (2017)</cite> .
extends
1ab7893c2a930bc5af3c34a5912dd2_31
Starting with the typically used word-level, we have shown that using an embedding space with 200 dimensions as used by <cite>Liu et al. (2017)</cite> in <cite>their study</cite> leads to better results than any of the dimensionality values used by Khanpour et al. (2016) .
uses
1ab7893c2a930bc5af3c34a5912dd2_32
In the case of <cite>the study</cite> by <cite>Liu et al. (2017)</cite> , direct result comparison with those reported is not possible since they were obtained on different sets.
differences
1ab7893c2a930bc5af3c34a5912dd2_33
In the case of <cite>the study</cite> by <cite>Liu et al. (2017)</cite> , direct result comparison with those reported is not possible since they were obtained on different sets. However, the result differences between overlapping steps in our experiments are consistent with those described in <cite>their paper</cite>.
similarities differences
1ab7893c2a930bc5af3c34a5912dd2_34
In the case of <cite>the study</cite> by <cite>Liu et al. (2017)</cite> , direct result comparison with those reported is not possible since they were obtained on different sets. However, the result differences between overlapping steps in our experiments are consistent with those described in <cite>their paper</cite>. Thus, we can safely state that <cite>their approach</cite> can be improved by using five parallel CNNs, dependency-based word embeddings, and the summary representation of context information.
similarities differences
1baddfeea7d11fc02cc26ff698a601_0
We have recently introduced a new transdimensional random field (TRF 1 ) LM <cite>[4]</cite> , where the whole sentence is modeled as a random field. As the random field approach avoids local normalization which is required in the conditional approach, it is computationally more efficient in computing sentence probabilities and has the potential advantage of being able to flexibly integrating a richer set of features.
background
1baddfeea7d11fc02cc26ff698a601_1
Improvements: First, in <cite>[4]</cite> , the diagonal elements of the Hessian matrices are online estimated during the SA iterations to rescale the gradients, which is shown to benefit the convergence of the training algorithm.
background motivation