id | paper_text | review |
---|---|---|
iclr_2018_H1l8sz-AW | Learning rules for neural networks necessarily include some form of regularization. Most regularization techniques are conceptualized and implemented in the space of parameters. However, it is also possible to regularize in the space of functions. Here, we propose to measure networks in an L2 Hilbert space, and test a learning rule that regularizes the distance a network can travel through L2-space each update. This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change. The resulting learning rule, which we call Hilbert-constrained gradient descent (HCGD), is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions. Experiments show that the HCGD is efficient and leads to considerably better generalization. | I have read the comments and rebuttal - I do not have the luxury of time to read the revision in depth.
It seems that the authors have made an effort to accommodate reviewers' comments. I upgraded the rating.
-----------------------------------------------------------------------------------------------------------------------
Summary: The paper considers the use of natural gradients for learning. The added twist is the substitution of the KL divergence with the Wasserstein distance, as proposed in GAN training. The authors suggest that Wasserstein regularization improves generalization over SGD with a little extra cost.
The paper is structured as follows:
1. KL divergence is used as a similarity measure between two distributions.
2. Regularizing the objective with KL div. seems promising, but expensive.
3. We usually approximate the KL div. with its 2nd order approximation - this introduces the Hessian of the KL divergence, known as the Fisher information matrix.
4. However, computing and inverting the Fisher information matrix is computationally expensive.
5. One solution is to approximate the solution F^{-1} J using gradient descent. However, we still need to calculate F. There are options where F could be formed as the outer product of a collection of gradients of individual examples ('empirical Fisher'); the standard identities behind items 3-5 are written out just after this list.
6. This paper does not move towards Fisher information, but towards Wasserstein distance: after a "good" initialization via SGD is obtained, the inner loop continues updating that point using the Wasserstein regularized objective.
7. No large matrices need to be formed or inverted; however, more passes are needed per outer step.
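For reference, the standard identities behind items 3-5 (textbook material, my own summary rather than the paper's notation): D_{KL}(p_\theta || p_{\theta+\delta}) \approx \frac{1}{2} \delta^T F \delta with F = E_x[\nabla_\theta \log p_\theta(x) \nabla_\theta \log p_\theta(x)^T], so the KL-regularized step is the natural-gradient update \delta \propto -F^{-1} \nabla_\theta J(\theta); in practice F^{-1}\nabla_\theta J is approximated with a few conjugate-gradient or gradient-descent iterations, and F is replaced by the outer product of per-example gradients (the "empirical Fisher").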
Importance:
A certain lack of originality and poor experiments lead to low importance.
Clarity:
The paper needs major revision w.r.t. presenting and highlighting the new main points. E.g., one needs to get to page 5 to understand that the paper is just based on the WGAN ideas in Arjovsky et al., but with a different application (not GANs).
Originality/Novelty:
The paper, based on WGAN motivation, proposes Wasserstein distance regularization over KL div. regularization for training of simple models, such as neural networks. Beyond this, the paper does not provide any further original idea. So, slight to no novelty.
Main comments:
1. Would the approximation of C_0 by its second-order Taylor expansion (that also introduces a Hessian) help? This would require the combination of two Hessian matrices.
2. Experiments are really demotivating: it is not clear whether using plain SGD or the proposed method leads to better results.
Overall:
Rejection. |
iclr_2018_rJBwoM-Cb | We introduce a novel approach to tree-to-tree learning, the neural tree transducer (NTT), a top-down depth first context-sensitive tree decoder, which is paired with recursive neural encoders. Our method works purely on tree-to-tree manipulations rather than sequence-to-tree or tree-to-sequence and is able to encode and decode multiple depth trees. We compare our method to sequence-to-sequence models applied to serializations of the trees and show that our method outperforms previous methods for tree-to-tree transduction. | The paper introduces a neural tree decoder architecture for binary trees that conditions the next node prediction on
representations of its ancestors (encoded with an LSTM recurrent net) and, for right-sibling nodes, its left sibling subtree (encoded with a binary LSTM recursive net).
To perform tree to tree transduction the input tree is encoded as a vector with a Tree LSTM; correspondences between input and output subtrees are not modelled directly (using e.g. attention) as is done in traditional tree transducers.
While the term context-sensitive should be used with caution, I do accept the claim here, although the notation used does not make the exposition clear.
Experimental results show that the architecture performs better at synthetic tree transduction tasks (relabeling, reordering, deletion) than sequence-to-sequence baselines.
While neural tree-to-tree transduction is an understudied problem, the contributions of this paper are very narrow and it is not shown that the proposed approach will generalize to more expressive models or real-world applications of tree-to-tree transduction.
Existing neural tree decoders, such as Dong and Lapata or Alvarez-Melis and Jaakkola, could be combined with tree LSTM encoders without any technical innovations and could possibly do as well as the proposed model for the transduction tasks tested - no experiments are performed with existing tree-based decoder architectures.
Specific comments per section:
1. Unclear what is meant by "equivalent" in the first paragraph.
2. The model does not assign an explicit probability to the tree structure - rather it seems to rely on the distinction between terminal and non-terminal symbols and the restriction to binary trees to know when closing brackets are implied - this is not made clear, and a general model should not have this restriction, as there are many cases where we want to generate non-binary trees.
The production rule notation used is incorrect and confusing, mixing sets with non-terminals and terminal symbols:
A better notation for the rules in 2.1.1 would be something like S -> P | v | \epsilon; P -> Q R | Q u | u Q | u w, where P, Q, R \in O and u, w \in v.
2.1.2. Splitting production rules as ->_left, ->_right is not standard notation. Rather introduce intermediate non-terminals in the grammar:
O -> O_L O_R; O_L -> a | Q, O_R -> b | Q.
2.1.3 The context-sensitivity here arises when conditioning on the entire left sibling subtree (not just the top non-terminal).
The rules should have a format such as O -> O_L O_R; O_L -> a | Q; \alpha O_R -> \alpha a | \alpha Q, where \alpha is an entire subtree rooted at O_L.
2.1.4 Should be g(x|.) = exp( ), the softmax function includes the normalization which is done in the equation below.
3. Note that it is possible to restrict the decoder to produce tree structures while keeping a sequential neural architecture. For some tasks sequential decoders do actually produce mostly well-formed trees, given enough training data.
RNNG encodes completed subtrees recursively, and the stack LSTM encodes the entire partially-produced tree, so it does produce and condition on trees not just sequences. The model in this paper is not more expressive than RNNG, it just encodes somewhat different structural biases, which might or might not be suited for real tasks.
4. In the examples given, the same set of symbols are used as both terminals and non-terminals. How is the tree structure then predicted by the decoder?
Details about the training setup are missing: How is the training data generated, what is the size of the trees during training (compared to testing)?
4.2 The steep drop in performance between depth 5 and 6 indicates that the model is very sensitive to its memorization capacity and might not be generalizing over the given training data.
For real tree-to-tree applications involving these operations, there is good reason to believe that some kind of attention mechanism will be needed over the input tree during decoding.
References should generally be to published proceedings rather than to arXiv where available - e.g. Aharoni and Goldberg, Dong and Lapata, Eriguchi et al., Rush et al. For Graehl and Knight there is a published journal paper in Computational Linguistics. |
iclr_2018_r1nmx5l0W | Variational RNNs are proposed to output "creative" sequences. Ideally, a collection of sequences produced by a variational RNN should be of both high quality and high variety. However, existing decoders for variational RNNs suffer from a trade-off between quality and variety. In this paper, we seek to learn a variational RNN that decodes high-quality and high-variety sequences. We propose the Self-Improving Collaborative GAN (SIC-GAN), where there are two generators (variational RNNs) collaborating with each other to output a sequence and aiming to trick the discriminator into believing the sequence is of good quality. By deliberately weakening one generator, we can make another stronger in balancing quality and variety. We conduct experiments using the QuickDraw dataset and the results demonstrate the effectiveness of SIC-GAN empirically. | This paper baffles me. It appears to be a stochastic RNN with skip connections (so it's conditioned on the last two states rather than last one) trained by an adversarial objective (which is no small feat to make work for sequential tasks) with results shown on the firetruck category of the QuickDraw dataset. Yet the authors claim significantly more importance for the work than I think it merits.
First, there is nothing variational about their variational RNN. They seem to use the term to be equivalent to "stochastic", "probabilistic" or "noisy" rather than having anything to do with optimizing a variational bound. To strike the right balance between pretension and accuracy, I would suggest substituting the word "stochastic" everywhere "variational" is used.
Second, there is nothing self-improving or collaborative about their self-improving collaborative GAN. Once the architecture is chosen to share the weights between the weak and strong generator, the only difference between the two is that the weak generator has greater noise at the output. In this sense the architecture should really be seen as a single model with different noise levels at alternating steps. In this sense, I am not entirely clear on what the difference is between the SIC-GAN and their noisy GAN baseline - presumably the only difference is that the noisy GAN is conditioned on a single timestep instead of two at a time? The claim that these models are somehow "self-improving" baffles me as well - all machine learning models are self-improving, that is the point of learning. The authors make a comparison to AlphaGo Zero's use of self-play, but here the weak and strong generators are on the same side of the game, and because there are no game rules provided beyond "reproduce the training set", there is no possibility of discovery beyond what is human-provided, contrary to the authors' claim.
Third, the total absence of mathematical notation made it hard in places to follow exactly what the models were doing. While there are plenty of papers explaining the GAN framework to a novice, at least some clear description of the baseline architectures would be appreciated (for instance, a clearer explanation of how the SIC-GAN differs from the noisy GAN). Also the description of the soft $\ell_1$ loss (which the authors call the "1-loss" for some reason) would benefit from a clearer mathematical exposition.
Fourth, the experiments seem too focused on the firetruck category of the QuickDraw dataset. As it was the only example shown, it's difficult to evaluate their claim that this is a general method for improving variety without sacrificing quality. Their chosen metrics for variety and detail are somewhat subjective, as they depend on the fact that some categories in the QuickDraw dataset resemble firetrucks in the fine detail while others resemble firetrucks in outline. This is not a generalizable metric. Human evaluation of the relative quality and variety would likely suffice.
Lastly, the entire section on the strong-weak collaborative GAN seems to add nothing. They describe an entire training regimen for the model, yet never provide any actual experimental results using that model, so the entire section seems only to motivate the SIC-GAN which, again, seems like a fairly ordinary architectural extension to GANs with RNN generators.
The results presented on QuickDraw do seem nice, and to the best of my knowledge it is the first (or at least best) application of GANs to QuickDraw - if they refocused the paper on GAN architectures for sketching and provided more generalizable metrics of quality and variety it could be made into a good paper. |
iclr_2018_SkYibHlRb | Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice among them. This phenomenon is documented as the "order-matters" problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9% to 13% on the WikiSQL task. | This paper proposes a neural network-based approach to converting natural language questions to SQL queries. The idea is to use a small grammar to facilitate the process, together with some independence assumptions. It is evaluated on a recently introduced dataset for natural language to SQL.
Pros:
- good problem, NL2SQL is an important task given how dominant SQL is
- incorporating a grammar ("sketch") is a sensible improvement.
Cons:
- The dataset used makes very strong simplification assumptions. Not a problem per se, but it is not the most challenging SQL dataset. The ATIS corpus is NL2SQL and much more challenging and realistic:
Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: the ATIS-3 corpus. In Proceedings of the workshop on Human Language Technology (HLT '94). Association for Computational Linguistics, Stroudsburg, PA, USA, 43-48. DOI: https://doi.org/10.3115/1075812.1075823
- In particular, the assumption that every token in the SQL statement is either an SQL keyword or appears in the natural language statement is rather atypical and unrealistic.
- The use of a grammar in the context of semantic parsing is not novel; see this tutorial for many pointers:
http://yoavartzi.com/tutorial/
- As far as I can tell, the set prediction essentially predicts each element independently, without taking into account any dependencies. Nothing wrong, but also nothing novel; that is what most semantic parsing/semantic role labeling baseline approaches do. The lack of ordering among the edges doesn't mean they are independent.
- Given the rather constrained type of questions and SQL statements, it would make sense to compare it against approaches for question answering over knowledge-bases:
https://github.com/scottyih/Slides/blob/master/QA%20Tutorial.pdf
While SQL can express much more complex queries, the ones supported by the grammar here are not very different.
- Pasupat and Liang (2015) also split the data to make sure different tables appear only in one of training, dev, and test, and they developed their dataset using crowdsourcing.
- The comparison against Dong and Lapata (2016) is not fair because their model is dataset-agnostic and thus applicable to 4 datasets, while the one presented here is tailored to the dataset due to the grammar/sketch used. Also, suggesting that previous methods might not generalize well sounds odd given that the method proposed seems to use much larger datasets.
- Not sure I agree that mixing the same tables across training/dev/test is more realistic. If anything, it assumes more training data and manual annotation every time a new table is added. |
iclr_2018_S1fcY-Z0- | We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. A Bayesian hypernetwork h is a neural network which learns to transform a simple noise distribution, p( ) = N (0, I), to a distribution q(θ) := q(h( )) over the parameters θ of another neural network (the "primary network"). We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(θ|D) via sampling. In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(θ). In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection. | This paper presents Bayesian Hypernetworks; variational Bayesian neural networks where the variational posterior over the weights is governed by a hyper network that implements a normalizing flow (NF) such as RealNVP and IAF. As directly outputting the weight matrix with a hyper network is computationally expensive the authors instead propose to utilize weight normalisation on the weights and then use the hyper network to output scalar scaling variables for each hidden unit, similarly to what was done at [1]. The main difference with this prior work is that [1] consider these NF scaling variables as auxiliary random variables to a mean field Gaussian distribution over the weights whereas this paper attempts to posit a distribution directly on the weights via the NF. This avoids the nested variational approximation and auxiliary models of [1], which can potentially yield a tighter bound. The proposed method is evaluated on extensive experiments.
This paper seems like a plausible idea with extensive experiments but the similarity with [1] makes it an incremental contribution and, furthermore, it seems that it has a technical issue with what is explained in Section 3.3. More specifically, if you generate the parameters \theta according to Eq. 7 and posit a prior over \theta then you will have a problematic variational bound as there will be a KL divergence, KL(q(\theta) || p(\theta)), with distributions of different support (since q(\theta) is defined only along the directions spanned by u), which is infinite. For the KL to be valid you will need to posit a prior distribution over `g`, p(g), and then consider KL(q(g) || p(g)), with q(g) being given by the NF. From the experiment paragraph on page 5, though, I deduce that you instead employ “an isotropic standard normal prior over the weights”, i.e. \theta, thus I believe that you indeed have a problematic bound. How do you actually compute log q(\theta) when you employ the parametrisation discussed in 3.3? Did you use that parametrisation in every experiment?
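To spell out the support issue in symbols (my notation, following the review's reading of Eq. 7 rather than the authors' own): with weight normalisation one writes \theta = g \odot u, where u = v/\|v\| is a fixed (or deterministically learned) direction per unit and only the scalars g come from the hypernetwork, so q(\theta) is supported on the low-dimensional set \{g \odot u : g \in R^k\}, which has Lebesgue measure zero in the full parameter space; against an isotropic Gaussian prior p(\theta) = N(0, I) the term KL(q(\theta) || p(\theta)) is therefore not finite, whereas KL(q(g) || p(g)) with the normalizing-flow density q(g) is well defined.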
Other than that, I believe that it would be interesting to experiment with a `full` hyper network, i.e. directly generating the entire parameter vector \theta, e.g. in the toy regression experiment where the dimensionality is small. This would then better illustrate the tradeoffs you make when you reduce the flexibility of the hyper-network to just outputting the row scaling variables and the effect this has on the posterior approximation.
Typos:
(1) Page 3, 3.1.1 log(\theta) -> logp(\theta).
(2) Eq. 6, it needs to be |det \frac{\partial h(\epsilon)}{\partial \epsilon}|^{-1} or |det \frac{\partial h^{-1}(\theta)}{\partial \theta}| for a valid change of variables formula.
[1] Louizos & Welling, Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. |
iclr_2018_BJ_UL-k0b | Recasting Gradient-Based Meta-Learning as Hierarchical Bayes
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm's operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation. | Summary
The paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al.). The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors, c) based on these insights, the paper proposes a variant of MAML using a Laplace approximation (with additional approximations for the covariance matrix). The paper finally provides an evaluation on the mini ImageNet problem without significantly improving on the MAML results on the same task.
Pro:
- The topic is timely and of relevance to the ICLR community continuing a current trend in building meta-learning system for few-shot learning.
- Provides valuable insight into the MAML objective and its relation to probabilistic models
Con:
- The paper is generally well-written, but I find (as a non-expert in meta-learning) that certain fundamental aspects could have been explained better or in more detail (see below for details).
- The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the convergence of the proposed method (compared to e.g. MAML)
- I do not think the empirical results provide enough evidence that it is a useful/robust method. In particular, they do not provide insight into which types of problems (small/large, linear/non-linear) the method is applicable to.
Detailed comments/questions:
- The use of Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point-of-view. It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization as the truncation does not result in the approximation being created at a mode. Could the authors perhaps comment on:
a) whether it is even meaningful to talk about the approximations as probability distributions during the optimization (given the PSD approximation to the Hessian), or does it only make sense after convergence?
b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate)
- Sec 4.1, p5: Last equation: Perhaps useful to explain the term $\log p(\phi_j^* | \theta)$ and why it is not in Subroutine 4. Should $\phi^*$ be $\hat\phi$?
- Sec 4.2: “A straightforward…”: I think it would improve readability to refer back to the previous equation (i.e. H) such that it is clear what is meant by “straightforward”.
- Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably a proof) that this leads to a consistent and robust estimator of \theta.
- It is not clear from the text or experiment how the learning parameters are set.
- Sec 5.1: It took some effort to understand exactly what was going on in the example and particular figure 5.1; e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5, the blue points are not defined in the caption, the terminology e.g. “pre-update density” is new at this point. I think it would benefit the readability to provide the reader with a bit more guidance.
- Sec 5.1: While the qualitative example is useful (with a bit more text), I believe it would have been more convincing with a quantitative example to demonstrate e.g. the convergence of the proposal compared to std MAML and possibly compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case)
- Sec 5.2: The abstract claims increased performance over MAML, but the empirical results do not seem to be significantly better than MAML? I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance.
- Sec 5.2: The authors have left out "Mishra et al." from the comparison due to the model being significantly larger than the others. Could the authors provide insight into why they did not use the ResNet structure from the TCML paper in their L-MLMA scheme?
- Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate MAML as an HB model. Given the advancement in gradient-based inference for HB over the last couple of years (e.g. variational, nested Laplace, expectation propagation, etc.) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly at the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area?
Minor:
- Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product |
iclr_2018_S1GUgxgCW | Despite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global "topic" distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option compared to competitive baseline models. | The paper proposes a conversational model with topical information, by combining a seq2seq model with neural topic models. The experiments and human evaluation show that the model outperforms the seq2seq baseline and a latent-variable variant of seq2seq.
The paper is interesting, but it also has certain limitations:
1) To my understanding, it is a straightforward combination of seq2seq and one of the neural topic models without any justification.
2) The evaluation doesn't show how the topic information could influence word generation. None of the metrics in Table 2 could be used to justify the effect of topical information.
3) There is no analysis of the model behavior, therefore there is no way we could get a sense of how the model actually works. One possible analysis is to investigate the values $l_t$ and the corresponding words, which to some extent will tell us how the topical information is used in generation. In addition, it would be even better if there were some analysis of the topics extracted by this model.
This paper also doesn't pay much attention to the existing work on topic-driven conversational modeling. For example "Topic Aware Neural Response Generation" from Xing et al., 2017.
Some additional issues:
1) In the second line under equation 4, y_{t-1} -> y_{t}
2) In the first paragraph of section 3, two "MLP"'s are confusing
3) In the first paragraph of page 6, words with "highest inverse document frequency" are used as stop words? |
iclr_2018_r1DPFCyA- | This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-of-the-art on a standard k-shot learning dataset by a large margin. Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning. | The authors present a k-shot learning method that is based on generating representations with a pre-trained network and learning a regularized logistic regression using the available data. The regularised regression is formulated as a MAP estimation problem with the prior estimated from the weights of the original network connecting the final hidden layer to the logits — before the soft-max layer.
The motivation of the article regarding “concepts” is interesting. It seems especially justified when the training set that is used to train the original network has objects similar to those in the smaller set that is used for k-shot learning. The maps shown in Figures 6 and 7 provide good motivation for this approach.
Despite the strong motivation, the article raises some concerns regarding the method.
1. The assumption about independence of w vectors across classes is a very strong one and as far as I can see, it does not have a sound justification. The original networks are trained to distinguish between classes. The weight vectors are estimated with this goal. Therefore, it is very likely that vectors of different classes are highly correlated. Going beyond this assumption also seems difficult. The proposed model estimates $\theta^{MAP}$ using only one W matrix, the one that is estimated by training the original network in the most usual way. In this case, the prior over $\theta$ would have a large influence on the MAP estimate and setting it properly becomes important. As far as I can see, there is no good recipe presented in the article for setting this prior.
2. How is the prior model defined? It is the most important component of the method while precise details are not provided. How are the hyperparameters set? Furthermore, this detail needs to be in the main text.
3. With the isotropic assumption on the covariance matrix, the main difference between logistic regression, which is regularized by the L2 norm with the coefficient set proportional to the empirical variance, and the proposed method seems to be the mean vector $\mu^{MAP}$ (a schematic form is written out after this numbered list). From the details provided in the appendix — which should be in the main text in my opinion — I believe this vector is a combination of the prior and the mean of w_c across classes. If the prior is set to 0, how different is this vector from 0? The authors should focus on this in my opinion to explain why the methods work differently in 1-shot learning. In the other problems, the results suggest they are pretty much the same.
4. The authors’ motivation about concepts is interesting; however, if the model bases its prediction on the mean of the w_c vectors over classes, then I am not sure the authors really achieve what they motivate for.
5. The results are not very convincing. If the method were substantially different from the baseline, I believe this would have been no problem. Given the proximity of the proposed method to the regularised logistic regression baseline, the lack of empirical advantage is an issue. If the proposed model works better in the 1-shot scenario, then the authors should delve into it to explain the advantage.
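Schematic form of the comparison in point 3 (my own shorthand, with \phi(x) the pre-trained features and an isotropic Gaussian prior N(\mu^{MAP}, \sigma^2 I) on each class weight w_c): the MAP objective for the new classes is \sum_n \log \mathrm{softmax}(W\phi(x_n))_{y_n} - \frac{1}{2\sigma^2} \sum_c \|w_c - \mu^{MAP}\|^2, i.e. L2-regularized logistic regression shrunk towards \mu^{MAP} rather than towards zero; if \mu^{MAP} is (close to) the average of the old-class weight vectors, the practical difference from the standard L2 baseline rests entirely on how far that average is from zero.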
Minor comments:
The "Evaluation in an online setting" section is unclear. It needs to be rewritten in my opinion. |
iclr_2018_HJqUtdOaZ | Automatic classification of objects is one of the most important tasks in engineering and data mining applications. Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, it can be done by analyzing data sets and their features for a particular problem. Feature combination is one approach that can improve the quality of the features. In this paper, a structure similar to a Feed-Forward Neural Network (FFNN) is used to generate an optimized linear or non-linear combination of features for classification. A Genetic Algorithm (GA) is applied to update weights and biases. Since the nature of data sets and their features impacts the effectiveness of the combination and classification system, linear and non-linear activation functions (or transfer functions) are used to achieve a more reliable system. Experiments on several UCI data sets, using a minimum distance classifier as a simple classifier, indicate that the proposed linear and non-linear intelligent FFNN-based feature combination can present more reliable and promising results. By using such a feature combination method, there is no need to use more powerful and complex classifier anymore. | This paper proposes using a feedforward neural network (FFNN) to extract intermediate features which are input to a 1NN classifier. The parameters of the FFNN are updated via a genetic algorithm with a fitness function defined as the error on the downstream classification, on a held-out set. The performance of this approach is measured on several UCI datasets and compared with baselines.
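To make the reviewed pipeline concrete, here is a minimal toy reconstruction (my own sketch on Iris; the one-layer tanh network, the elitist GA with Gaussian mutation, and all hyper-parameters are placeholder choices, not the paper's):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def ffnn_features(X, params):
    W, b = params                      # single hidden layer: linear map + bias
    return np.tanh(X @ W + b)          # non-linear "feature combination"

def fitness(params, X_tr, y_tr, X_val, y_val):
    # fitness = accuracy of a 1-NN classifier on the held-out split
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(ffnn_features(X_tr, params), y_tr)
    return knn.score(ffnn_features(X_val, params), y_val)

def evolve(X_tr, y_tr, X_val, y_val, n_hidden=5, pop=20, gens=30, sigma=0.3):
    d = X_tr.shape[1]
    population = [(rng.normal(size=(d, n_hidden)), rng.normal(size=n_hidden))
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda p: fitness(p, X_tr, y_tr, X_val, y_val),
                        reverse=True)
        elite = scored[: pop // 4]     # keep the best quarter
        population = list(elite)
        while len(population) < pop:   # refill with mutated copies of the elite
            W, b = elite[rng.integers(len(elite))]
            population.append((W + sigma * rng.normal(size=W.shape),
                               b + sigma * rng.normal(size=b.shape)))
    return max(population, key=lambda p: fitness(p, X_tr, y_tr, X_val, y_val))

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
best = evolve(X_tr, y_tr, X_val, y_val)
print("held-out 1-NN accuracy:", fitness(best, X_tr, y_tr, X_val, y_val))
```

The sketch also makes the point in the first bullet below explicit: the GA simply plays the role of the optimizer, and the "intelligent combination of features" is the hidden representation it tunes for the downstream classifier.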
– The paper’s main contribution seems to be a neural network with a GA optimization for classification that can learn “intelligent combinations of features”, which can be easily classified by a simple 1NN classifier. But isn't this exactly what neural networks do – learn intelligent combinations of features optimized (in this case, via GA) for a downstream task? This has already been successfully applied in multiple domains, e.g. in computer vision (Krizhevsky et al., NIPS 2012), NLP (Bahdanau et al., 2014), image retrieval (Krizhevsky et al., ESANN 2011), etc., and also studied comprehensively in the autoencoding literature. There also exists prior work on optimizing neural nets via GA (Leung et al., IEEE Transactions on Neural Networks, 2003). However, this paper claims both as novelties while not offering any improvement/comparison.
– The claim “there is no need to use more powerful and complex classifier anymore” is unsubstantiated, as the paper’s approach still entails using a complex classifier (a FFNN) to learn an optimal intermediate representation.
– The choice of activations is not motivated, and performance on variants is not reported. For instance, why is that particular sigmoid formulation used?
– The use for a genetic algorithm for optimization is not motivated, and no comparison is made to the performance and efficiency of other approaches (like standard backpropagation). So it is unclear why GA makes for a better choice of optimization, if at all.
– The primary baselines compared to are unsupervised methods (PCA and LDA), and so demonstrating improvements over those with a supervised representation does not seem significant or surprising. It would be useful to compare with a simple neural network baseline trained for K-way classification with standard backpropagation (though the UCI datasets may potentially be too small to achieve good performance).
– The paper is poorly written, containing several typos, incomplete and unintelligible sentences, incorrect captions (e.g. Table 4), etc. |
iclr_2018_SyPMT6gAb | Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error. In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedback. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work. With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and are also consistent with our theoretical results. | In this paper the authors studied the problem of off-policy learning, in the bandit setting when a batch log of data generated by the baseline policy is given. Here they first summarize the surrogate objective functions derived by existing approaches such as importance sampling and variance regularization (Swaminathan et al.). Then they extend the results in Theorem 2 of the paper by Cortes et al. (which also uses the empirical Bernstein inequality by Maurer and Pontil), and derive a new surrogate objective function that involves the chi-square divergence. Furthermore, the authors also show that the lower bound of this objective function can be iteratively approximated by variational f-GAN techniques, which could potentially be more numerically stable and empirically have lower variance.
In general, I think the problem studied in this paper is very interesting, and the topic of counterfactual learning, especially policy optimization with the use of offline and off-policy log data, is important. However, I think the theoretical contribution in this paper on off-policy learning is quite incremental. Also, the parts that involve the f-GAN are still questionable to me.
Detailed comments:
In these variance regularization formulations (for example the one proposed in this paper, or the one derived in Swaminathan's paper), \lambda can be seen as a regularization parameter that trades off bias and variance of the off-policy value estimator R(h) (for example the RHS of equation 6). Exactly calculating \lambda requires either the size of the policy class (when the policy class is finite) or the complexity constants (which exist in C_1 and C_2 in equation 7, but are not clearly defined in this paper). Then the main question is how to choose \lambda such that the surrogate objective function is reasonable. For example in the safety setting (off-policy policy learning with baseline performance guarantees, for example see the problem setting in the paper by P. Thomas 2015: High Confidence off-policy improvement), one always needs the upper bound in (6) to hold. This makes the choice of \lambda crucial and challenging. Unfortunately I don't see much discussion in this paper about choosing \lambda, even in the context of bias-variance trade-offs. This makes me uncomfortable in believing that the results in the experiments hold for other (reasonable) choices of \lambda.
The contribution of this paper is two-fold: 1) the authors extend the results from Cortes's paper to derive a new surrogate objective function, and 2) they show how this objective can be approximated by f-GAN techniques. The first contribution is rather incremental as it's just a direct application of Theorem 2 in Cortes's paper. Regarding the second contribution, I am a bit concerned about the derivations of Equation 9, especially the first inequality and the second equality. I see that the first inequality is potentially an application of the conjugate function inequality, but more details are needed (f^* is not even defined). For the second equality, it's unclear to me how one can swap the sup and the E_x operators. More explanations are definitely needed to show their mathematical correctness, especially when this part is a main contribution. Even if the derivations are right, the f-GAN surrogate objective is a lower bound of the surrogate objective function, while the surrogate function is an upper bound of the true objective function (which is inaccessible). How does one guarantee that the f-GAN surrogate objective is a reasonable one?
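For reference, the variational representation presumably being invoked in Equation 9 is the standard one for f-divergences (Nguyen et al.; the basis of f-GAN): D_f(P || Q) \ge \sup_T E_{x \sim P}[T(x)] - E_{x \sim Q}[f^*(T(x))], where f^* is the convex conjugate of f and the supremum ranges over a class of critics T. For an unrestricted critic the supremum can be taken pointwise inside the expectation, which is the usual justification for swapping sup and E_x; once T is restricted to a parametric family, that step only yields a further lower bound, so the authors should state explicitly on which side of each inequality their final f-GAN surrogate sits relative to the chi-square upper bound.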
Numerical comparisons between the proposed approach and the approach from Swaminathan's paper are required to demonstrate the superiority of the proposed approach. Are there also comparisons in performance between the approach based on the original chi-square surrogate function and the one based on the f-GAN objective (in order to showcase the need for using the f-GAN)?
Minor comments:
In the experimental section, the method POEM is not defined.
The paper is in okay shape, but there are several minor typos, for example \hat{R}_{(} on page 3, and several typos in Algorithm 1 and Algorithm 2.
In general, I think this paper is studying an interesting topic, but the aforementioned issues make me feel that the paper's current status is still unsuitable for publication. |
iclr_2018_S1m6h21Cb | The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling, and most recently in reinforcement learning. In this paper we describe three natural properties of probability divergences that we believe reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting this is a serious issue in practice. Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. We give empirical results on a number of domains comparing these three divergences. To illustrate the practical relevance of the Cramér distance we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it has a number of desirable properties over the related Wasserstein GAN. | The manuscript proposes to use the Cramer distance as a measure between distributions (acting as a loss) when optimizing
an objective function using stochastic gradient descent (SGD). The Cramer distance is a Bregman divergence and is a member of the Lp family of divergences. Here a "distance" means a symmetric divergence measure that satisfies the relaxed triangle inequality. The motivation for using the Cramer distance is that it has unbiased sample gradients while still enjoying some other properties such as scale sensitivity and sum invariance. The authors also prove that for the Bernoulli distribution, there is a lower bound independent of the sample size for the deviation between the gradient of the Cramer distance and the expectation of the estimated gradient of the Cramer distance. Then, the multivariate case of the Cramer distance, called the energy distance, is also briefly presented. The paper closes with some experiments on ordinal regression using neural networks and training GANs using the Cramer distance.
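For orientation, the distances can be written through CDFs (standard definitions, recalled from memory rather than quoted from the manuscript): for one-dimensional P and Q with CDFs F_P and F_Q, the l_p family is l_p(P, Q) = (\int |F_P(x) - F_Q(x)|^p dx)^{1/p}; the 1-Wasserstein distance coincides with l_1, the Cramer distance is based on the p = 2 member (the squared L2 distance between the CDFs), and its multivariate generalization, the energy distance, is E(P, Q) = 2 E\|X - Y\| - E\|X - X'\| - E\|Y - Y'\| with X, X' \sim P and Y, Y' \sim Q independent.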
In general, the manuscript is well written and the ideas are smoothly presented. While the manuscript gives some interesting insights, I find that the contribution could have been explained in a more broader sense, with a stronger compelling message.
Some remarks and questions:
1. The KL divergence considered here is sum invariant but not scale sensitive, and has unbiased sample gradients. The authors are considering here the standard (asymmetric) KL divergence (sec. 2.1). Is it the case that losing scale sensitivity makes the KL divergence insensitive to the geometry of the outcomes? Or is it due to the fact that the KL divergence is not symmetric? Or something else?
2. The main argument of the paper is that the simple sample-based estimate of the gradient using the Wasserstein metric is a biased estimate of the true gradient of the Wasserstein distance, and hence it is not favored with SGD-type algorithms. Are there any other estimators in the literature for the gradient of the Wasserstein distance? Was this issue overlooked in the literature?
3. I am not sure if a biased estimate of the gradient will lead to a ``wrong minimum'' in an energy space that has infinitely many local minima. Of course one should use an unbiased estimate of the gradient whenever this is possible. However, even when this is possible, there is no guarantee that this will consistently lead to deeper and ``better'' minima, and there is no guarantee either that these deep local minima reflect meaningful results.
4. To what extent can one generalize Theorem 1 to other probability distributions (continuous and discrete) and to the multivariate case as well?
5. I also don't think that the example given in sec. 4.2 and depicted in Fig. 1 is the best and simplest way to illustrate the benefit of the Cramer distance over the Wasserstein distance. Similarly, the experiments for the multivariate case using GANs and neural networks do not really deliver tangible, concrete and conclusive results. Partly, these results are very qualitative, which can be understood within the context of GANs. However, the authors could have used other models/algorithms where they can obtain concrete quantitative results (for this type of contribution). In addition, such sophisticated models (with various hyper-parameters) can mask the true benefit of the Cramer distance, and can also mask the extent of how good/poor the sample estimate of the Wasserstein gradient is. |
iclr_2018_SJ19eUg0- | Second-order methods for neural network optimization have several advantages over methods based on first-order gradient descent, including better scaling to large mini-batch sizes and fewer updates needed for convergence. But they are rarely applied to deep learning in practice because of high computational cost and the need for model-dependent algorithmic variations. We introduce a variant of the Hessian-free method that leverages a block-diagonal approximation of the generalized Gauss-Newton matrix. Our method computes the curvature approximation matrix only for pairs of parameters from the same layer or block of the neural network and performs conjugate gradient updates independently for each block. Experiments on deep autoencoders, deep convolutional networks, and multilayer LSTMs demonstrate better convergence and generalization compared to the original Hessian-free approach and the Adam method. | Summary:
The paper considers second-order optimization methods for training of neural networks.
In particular, the contribution of the paper is a Hessian-free method that works on blocks of parameters (this is a user-defined splitting of the parameters into blocks, e.g., the parameters of each layer form one block, or the parameters of several layers could constitute a block).
This results in a block-diagonal approximation to the curvature matrix, intended to improve the convergence properties of Hessian-free optimization: in the latter, a single step might require many CG iterations, so the benefit of using second-order information is not apparent.
This is mainly an experimental work, where the authors show the merits of their approach on deep autoencoders, convolutional networks and LSTMs: results show favourable performance compared to the original Hessian-free approach and the Adam method.
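In symbols (my paraphrase of the construction, not the authors' notation): with the parameters partitioned into blocks \theta = (\theta_1, ..., \theta_B), the method keeps only the diagonal blocks G_b = J_b^T H_L J_b of the generalized Gauss-Newton matrix (J_b the Jacobian of the network output with respect to block b, H_L the Hessian of the loss with respect to the output) and solves B independent systems G_b \Delta\theta_b = -\nabla_{\theta_b} J by conjugate gradients, instead of one coupled system over the full curvature matrix.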
Originality:
The paper is based on the works of Collobert (2004) and Le Roux et al. (2008), as well as the work of Martens: the twist is that each layer of the neural network is considered a parameter block, so that gradient interactions among weights in a single layer are more useful than those between weights in different layers. This increases the separability of the problem and reduces the complexity.
Importance:
Understanding the difference between first- and second-order methods for NN training is an important topic. The use of second-order methods could be considered to be in its infancy, compared to the wide variety of first-order methods. New, interesting results on second-order methods would definitely attract some attention at the conference.
Presentation/Clarity:
The paper is well structured and well written. The authors clearly place their work w.r.t. state of the art and previous works, so that it is clear what is new and what is known.
Comments:
1. It is not clear why the deficiency of first-order methods in training NNs with big batches motivates us to turn to second-order methods. Is there a rationale for this statement? Or is it just because second-order methods are kind of the only other alternative we have?
2. Assume we can perform a second-order method, like Newton's method, on a deep NN. Since Newton's method was originally designed to find solutions with gradient equal to zero, and since NNs have saddle points (probably many more than local minima), even if we could perform exact second-order Newton steps, there is no guarantee whether we converge to a local minimum or a saddle point. However, since we perform Newton's method approximately in practice, this might help escape saddle points. Any comment on this aspect (I'm not aware whether this is already commented on in Schraudolph 2002, where the Gauss-Newton matrix was proposed instead of the Hessian)? |
iclr_2018_Syt0r4bRZ | Traditional recurrent neural network (RNN) or convolutional neural network (CNN) based sequence-to-sequence models cannot handle tree-structured data well. To alleviate this problem, in this paper, we propose a tree-to-tree model with specially designed encoder and decoder units, which recursively encode tree inputs into highly folded tree embeddings and decode the embeddings into tree outputs. Our model can represent the complex information of a tree while also restoring a tree from its embedding. We evaluate our model on a random tree recovery task and a neural machine translation task. Experiments show that our model outperforms the baseline model. | Summary: the paper proposes a tree2tree architecture for NLP tasks. Both the encoder and decoder of this architecture make use of memory cells: the encoder looks like a tree-LSTM to encode a tree bottom-up, the decoder generates a tree top-down by predicting the number of children first. The objective function is a linear mixture of the cost of generating the tree structure and the target sentence. The proposed architecture outperforms a recursive autoencoder on a self-to-self tree prediction task, and outperforms an LSTM seq2seq on En-Cn translation.
Comment:
- The idea of tree2tree has been around recently but it is difficult to make it work. I thus appreciate the authors’ effort. However, I wish the authors had done it more properly.
- The computation of the encoder and decoder is not novel. I was wondering how the encoder differs from a tree-LSTM. The decoder predicts the number of children first, but the authors don’t explain why they do that, nor compare this to existing tree generators.
- I don’t understand the objective function (eq 4 and 5). Both Ls are not cross-entropy because label and childnum are not probabilities. I also don’t see why using Adam is more convenient than using SGD.
- I think eq 9 is incorrect, because the decoder is not Markovian. To see this we can look at recurrent neural networks for language modeling: generating the current word is conditioned on the whole history, not only the previous word (the factorization is written out after these comments).
- I expect the authors would explain more about how difficult the tasks are (eg. some statistics about the datasets), how to choose values for lambda, what the contribution of the new objective is.
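Concretely, the factorization behind the comment on eq 9 (standard fact, my own notation): an LSTM decoder defines p(y_{1:T} | x) = \prod_t p(y_t | y_{<t}, x), with the full history entering through the hidden state, whereas a Markov factorization \prod_t p(y_t | y_{t-1}, x) conditions only on the previous symbol; if eq 9 assumes the latter, it does not describe what the proposed decoder actually computes.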
About writing:
- the paper has so many problems with wording, e.g. articles, plurality.
- many terms are incorrect, e.g. “dependent parsing tree” (should be “dependency tree”), “consistency parsing” (should be “constituency parsing”)
- In 3.1, Socher et al. do not use LSTMs
- I suggest the authors do some more literature review on tree generation |
iclr_2018_rknt2Be0- | Compositional Obverter Communication Learning from Raw Visual Input
One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand-engineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment. | This paper presents a technique for training a two-agent system to play a simple reference game involving recognition of synthetic images of a single object. Each agent is represented by an RNN that consumes an image representation and sequence of tokens as input, and generates a binary decision as output. The two agents are initialized independently and randomly. In each round of training, one agent is selected to be the speaker and the other to be the listener. The speaker generates outputs by greedily selecting a sequence of tokens to maximize the probability of a correct recognition w/r/t the speaker's model. The listener then consumes these tokens, makes a classification decision, incurs a loss, and updates its parameters. Experiments find that after training, the two agents converge to approximately the same language, that this language contains some regularities, and that agents are able to successfully generalize to novel combinations of properties not observed during training.
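As a reading aid, the greedy generation loop described in this paragraph can be sketched as follows (my own paraphrase; `score_fn`, the vocabulary, the threshold and the length cap are hypothetical placeholders, not the authors' code):

```python
import numpy as np

def obverter_generate(score_fn, vocab, max_len=20, threshold=0.95):
    """Greedy 'obverter' message generation.

    `score_fn(message) -> float` stands in for the speaker's own model: the
    probability it assigns to the (image, message) pair being a match. At each
    step the speaker appends whichever symbol most increases that probability,
    and stops once it exceeds `threshold` or the message reaches `max_len`.
    """
    message = []
    for _ in range(max_len):
        scores = [score_fn(message + [symbol]) for symbol in vocab]
        best = int(np.argmax(scores))
        message.append(vocab[best])
        if scores[best] >= threshold:
            break
    return message

# Toy usage with a dummy scorer (purely illustrative):
if __name__ == "__main__":
    target = ["red", "box"]
    dummy_score = lambda msg: sum(a == b for a, b in zip(msg, target)) / len(target)
    print(obverter_generate(dummy_score, vocab=["red", "blue", "box", "ball"]))
```

The explicit length cap matters because, as noted under MODEL below, nothing guarantees that a greedily-discoverable sequence ever reaches the confidence threshold.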
While Table 3 is suggestive, this paper has many serious problems. There isn't an engineering contribution---despite the motivation at the beginning, there's no attempt to demonstrate that this technique could be used either to help comprehension of natural language or to improve over the numerous existing techniques for automatically learning communication protocols. But this also isn't science: "generalization" is not the same thing as compositionality, and there's no testable hypothesis articulated about what it would mean for a language to be compositional---just the post-hoc analysis offered in Tables 2 & 3. I also have some concerns about the experiment in Section 3.3 and the overall positioning of the paper.
I want to emphasize that these results are cool, and something interesting might be going on here! But the paper is not ready to be published. I apologize in advance for the length of this review; I hope it provides some useful feedback about how future versions of this work might be made more rigorous.
WHAT IS COMPOSITIONALITY?
The title, introduction, and first three sections of this paper emphasize heavily the extent to which this work focuses on discovering "compositional" language. However, the paper doesn't even attempt to define what is meant by compositionality until the penultimate page, where it asserts that the ability to "accurately describe an object [...] not seen before" is "one of the marks of compositional language". Various qualitative claims are made that model outputs "seem to be" compositional or "have the strong flavor of" compositionality. Finally, the conclusion notes that "the exact definition of compositional language is somewhat debatable, and, to the best of our knowledge, there was no reliable way to check for the compositionality of an arbitrary language."
This is very bad.
It is true that there is not a universally agreed-upon definition of compositionality. In my experience, however, most people who study these issues do not (contra the citation-less 5th sentence of section 4) think it is simply an unstructured capacity for generalization. And the implication that nobody else has ever attempted to provide a falsifiable criterion, or that this paper is exempt from itself articulating such a criterion, is totally unacceptable. (You cannot put "giving the reader the tools to evaluate your current claims" in future work!)
If this paper wishes to make any claims about compositionality, it must _at a minimum_:
1. Describe a test for compositionality.
2. Describe in detail the relationship between the proposed test and other definitions of compositionality that exist in the literature.
3. If this compositionality is claimed to be "language-like", extend and evaluate the definition of compositionality to more complex concepts than conjunctions of two predicates.
Some thoughts to get you started: when talking about string-valued things, compositionality almost certainly needs to say something about _syntax_. Any definition you choose will be maximally convincing if it can predict _without running the model_ what strings will appear in the gray boxes in Figure 3. Similarly if it can consistently generate analyses across multiple restarts of the training run. The fact that analysis relies on seemingly arbitrary decisions to ignore certain tokens is a warning sign. The phenomenon where every color has 2--3 different names depending on the shape it's paired with would generally be called "non-compositional" if it appeared in a natural language.
This SEP article has a nice overview and bibliography: https://plato.stanford.edu/entries/compositionality/. But seriously, please, talk to a linguist.
MODEL
The fact that the interpretation model greedily chooses symbols until it reaches a certain confidence threshold would seem to strongly bias the model towards learning a specific communication strategy. At the same time, it's not actually possible to guarantee that there is a greedily-discoverable sequence that ever reaches the threshold! This fact doesn't seem to be addressed.
This approach also completely rules out normal natural language phenomena (consider "I know Mary" vs "I know Mary will be happy to see you"). It is at least worth discussing these limitations, and would be even more helpful to show results for other architectures (e.g. fixed-length codes or loss functions with an insertion penalty) as well.
There's some language in the appendix ("We noticed that a larger vocabulary and a longer message length helped the agents achieve a high communication accuracy more easily. But the resulting messages were challenging to analyze for compositional patterns.") that suggests that even the vague structure observed is hard to elicit, and that the high-level claim made in this paper is less robust than the body suggests. It's _really_ not OK to bury this kind of information in the supplementary material, since it bears directly on your core claim that compositional structure does arise in practice. If the emergence of compositionality is sensitive to vocab size & message length, experiments demonstrating this sensitivity belong front-and-center in the paper.
EVALUATION
The obvious null hypothesis here is that unseen concepts are associated with an arbitrary (i.e. non-compositional) description, and that to succeed here it's enough to recognize this description as _different_ without understanding anything about its structure. So while this evaluation is obviously necessary, I don't think it's powerful enough to answer the question that you've posed. It would be helpful to provide some baselines for reference: if I understand correctly, guessing "0" identically gives 88% accuracy for the first two columns of Table 4, and guessing based on only one attribute gives 94%, which makes some of the numbers a little less impressive.
Perhaps more problematically, these experiments don't rule out the possibility that the model always guesses "1" for unseen objects. It would be most informative to hold out multiple attributes for each held-out color (& vice-versa), and evaluate only with speakers / listeners shown different objects from the held-out set.
POSITIONING AND MOTIVATION
The first sentence of this paper asserts that artificial general intelligence requires the ability to communicate with humans using natural language. This paper has nothing to do with AGI, humans, or human language; to be blunt, this kind of positioning is at best inappropriate and at worst irresponsible. It must be removed. For the assertion that "natural language processing has shown great progress", the paper provides a list of citations employing neural networks exclusively and beginning in 2014 (!). I would gently remind the authors that NLP research did not begin with deep learning, and that there might be slightly earlier evidence for their claim.
The attempt to cite relevant work in philosophy and psychology is commendable! However, many of these citations are problematic, and some psycho-/historico-linguistic claims are still missing citations. A few examples: Ludwig Wittgenstein died in 1951, so it is somewhat surprising to see him cited for a 2010 publication (PI appeared in 1953); similarly Zipf (2016). The application of this Zipf citation is dubious; the sentence preceded by footnote 7 is false and has nothing to do with the processes underlying homophony in natural languages. I would encourage you to consult with colleagues in the relevant fields. |
iclr_2018_HkeJVllRW | We introduce an efficient way to increase the accuracy of convolution neural networks (CNNs) based on high model utilization without increasing any computational complexity. The proposed sparse-complementary convolution replaces regular convolution with sparse and complementary shapes of kernels, covering the same receptive field. By the nature of deep learning, high model utilization of a CNN can be achieved with more simpler kernels rather than fewer complex kernels. This simple but insightful model reuses of recent network architectures, ResNet and DenseNet, can provide better accuracy for most classification tasks (CIFAR-10/100 and ImageNet) compared to their baseline models. By simply replacing the convolution of a CNN with our sparse-complementary convolution, at the same FLOPs and parameters, we can improve top-1 accuracy on ImageNet by 0.33% and 0.18% for ResNet-101 and ResNet-152, respectively. A similar accuracy improvement could be gained by increasing the number of layers in those networks by ∼ 1.5×. | This paper introduces a new design of kernels in convolutional neural networks. The idea is to have sparse but complementary kernels with predefined patterns, which altogether cover the same receptive field as dense kernels. Because of the sparsity of such kernels, deeper or wider networks can be designed at the same computational cost as networks with dense kernels.
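To make the kernel-pattern idea concrete, here is a small numpy sketch of the two complementary 3x3 masks as I read them (the '+' and 'x' shapes, both containing the centre; the exact masks and the way kernels are assigned to them are my guess from the figures, not the authors' code):

```python
import numpy as np

# '+' shaped mask: centre row and centre column of the 3x3 window
plus_mask = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=np.float32)

# 'x' shaped mask: the four corners plus the centre
cross_mask = np.array([[1, 0, 1],
                       [0, 1, 0],
                       [1, 0, 1]], dtype=np.float32)

# Together they cover the full 3x3 receptive field, sharing only the centre
assert np.all(np.maximum(plus_mask, cross_mask) == 1)
assert plus_mask.sum() == cross_mask.sum() == 5  # ~5/9 of the weights each

def apply_complementary_masks(W):
    """Constrain a dense kernel bank W of shape (out_ch, in_ch, 3, 3):
    half of the output kernels use the '+' pattern, half the 'x' pattern."""
    out = W.copy()
    half = W.shape[0] // 2
    out[:half] *= plus_mask   # masks broadcast over the (out_ch, in_ch) dims
    out[half:] *= cross_mask
    return out
```

Writing it out this way also makes the point below explicit: for 5x5 or 7x7 windows, two patterns of this kind no longer cover the receptive field.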
Strengths:
- The complementary kernels come at no loss compare to standard ones
- The resulting wider networks can achieve better accuracies than the original ones
Weaknesses:
- The proposed patterns are clear for 3x3 kernels, but no solution is proposed for other dimensions
- The improvement over the baseline is not very impressive
- There is no comparison against other strategies, such as 1xk and kx1 kernels (e.g., Ioannou et al. 2016)
Detailed comments:
- The separation into + and x patterns is quite clear for 3x3 kernels. However, two such patterns would not be sufficient for 5x5 or 7x7 kernels. This idea would have more impact if it generalized to arbitrary kernel dimensions.
- The improvement over the original models are of the order of less than 1 percent. I understand that such improvements are not easy to achieve, but one could wonder if they are not due to the randomness of initialization/mini-batches. It would be more meaningful to report average accuracies and standard deviations over several runs of each experiment.
- Section 4.4 briefly discusses the comparison with using 3x1 and 1x3 kernels, mentioning that an empirical comparison is beyond the scope of this paper. To me, this comparison is a must. In fact, the discussion in this section is not very clear to me, as it mentions additional experiments that I could not find (maybe I misunderstood the authors). What I would like to see is the results of a model based on the method of Ioannou et al, 2016 with the same number of FLOPS.
- In Section 2, the authors review ideas of so-called random kernel sparsity. Note that the work of Wen et al., 2016, and that of Alvarez & Salzmann, NIPS 2016, do not really impose random sparsity, but rather aim to cancel out entire kernels, thus reducing the size of the model and not requiring implementation overhead. They also do not require pre-training and re-training, but just a single training procedure. Note also that these methods often tend not to decrease accuracy, but rather even increase it (by a similar magnitude to that in this paper), for a more compact model.
- In the context of random sparsity, it would be worth citing the work of Collins & Kohli, 2014, Memory Bounded Deep Convolutional Networks.
- I am not entirely convinced by the discussion of the grouped sparsity method in Section 3.1. In fact, the order of the channels is arbitrary, since the kernels are learnt. Therefore, it seems to me that they could achieve the same result. Maybe the authors can clarify this?
- Is there a particular reason why the central points appears in both complementary kernels (+ and x)?
- Why did the authors change the training procedure of ResNets slightly compared to the original paper, i.e., 50k training images instead of 45k training + 5k validation? Did the baseline (original model) reported here also use 50k? What would the results be with 45k?
- Fig. 5 is not entirely clear to me. What was the width of each layer? The original one or the modified one?
- It would be interesting to report the accuracy of a standard ResNet with 1.325*width as a comparison, as well as the runtime of such a model.
- In Table 4, I find it surprising that there is an actual speedup for the model with larger width. I would have expected the same runtime. How do the authors explain this? |
iclr_2018_H1-nGgWC- | GAUSSIAN PROCESS BEHAVIOUR IN WIDE DEEP NEURAL NETWORKS
Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented. | In part 1, the authors introduce motivation for studying wide neural networks and summarize related work.
In part 2, they present a theorem (the main theoretical result) stating that, under conditions on the weight priors, the output function of a multi-layer neural network (conditioned on a given input) weakly converges to a Gaussian process as the sizes of the hidden layers go to infinity.
remark on theorem 1: This result generalizes a result proven in 2015 stating that the normality of a layer propagates to the next as the size of the first layer goes to infinity. The result stated in this paper is proven by bounding the gap between the output distribution and the corresponding gaussian process, and by propagating this bound across layers (appendix).
In part 3, the authors discuss the choice of a nonlinearity function that enables easy computation of the kernels introduced in the covariance matrix of the limit normal distribution. Their choice lands on ReLU.
In part 4, the focus is on the speed of the convergence presented in Theorem 1. Experiments are conducted to show how the distance (maximum mean discrepancy) between the output distribution and its theoretical Gaussian process limit varies when the sizes of the hidden layers increase. The results show that the convergence (in MMD) happens consistently, although it is slower when the number of hidden layers gets bigger.
In part 5, the authors compare the distributions (finite Bayesian deep networks and their analogue Gaussian processes) in yet another way: by studying their agreement in terms of inference. For this purpose, the authors chose several criteria: the first two moments of the posterior, the log marginal likelihood and the predictive log-likelihood. The authors judge that the distributions agree on those criteria, but do not provide further analysis.
In part 6, now that it has been shown that the output distributions of Bayesian neural nets not only weakly converge to Gaussian processes but also behave similarly in terms of inference, the authors discuss ways to avoid the Gaussian process behaviour. Indeed, it seems that Gaussian processes with a fixed kernel cannot learn hierarchical representations, which are essential in deep learning.
The idea to avoid the Gaussian process behaviour is to contradict one of the hypothesis of the CLT (so that it does not hold anymore), either by controlling the size of intermediate layers, by using networks with infinite variance in the activities, or by choosing non-independent weights.
In part 7, it is concluded that the result proven for layer sizes going to infinity (Theorem 1) seems to be empirically verified on finite networks similar to those used in the literature. This can be used to simplify inference in cases where the Gaussian process behaviour is desired, and opens questions on how to avoid this behaviour the rest of the time.
Pros: The authors' line of thought is overall quite easy to follow. The main theoretical convergence result is stated early on, and the remainder of the article is dedicated to observing this result empirically from different angles (MMD, inference, predictive capability...). The last part contains a discussion concerning the extent to which it is actually a desired or an undesired result in classical deep learning use-cases, and the authors provide intuitive conditions under which the convergence would not hold. The stated theorem is a clear improvement on the past literature and is promising in a context where multi-layer neural networks are more and more studied.
Finally, the work is well documented.
Cons:
I have some concerns with the main result (Theorem 1) and found that some of the notations / formulas were not very clear.
Concerns with Theorem 1:
* at the end of the proof of Lemma 2, H_\mu is to be chosen large enough in order to get the \epsilon bound of the statement. However, I think that H_\mu is constrained by the statement of Proposition 2, not to be larger than a constant times 2^(H_{\mu+1}). Isn't that a problem?
* In the proof of Lemma 4, it looks like the matrix \Psi, from the Schur decomposition of \tilde f, actually depends on H_{\mu-2}, thus making \psi_max depend on it too, as well as the final \beta bound, which would contradict the statement that it depends only on n and H_{\mu}. Could you please double check?
Unclear statements/notations:
* end of page 3, the notations are not entirely consistent with previous notations
* I do not understand which distribution is assumed on epsilon and gamma when taking the expectation in equation (9).
* the notation x^(i) (in the theorem and the proof notably) could be changed, for the ^(i) index refers to the depth of the layer in the rest of the notations, and is here surprisingly referring to a set of observations.
* the statement of Theorem 1:
* I would change "for a countable input set" to "for any countable input set", if this holds true.
* does not say that the width has to go to infinity for the convergence to happen, which goes a bit in contradiction with the adjective "wide". However, the authors say that in practice, they use the identity as width function.
* I understood that the conclusion of part 3 was that the expectation of eq (9) is elegantly computable for certain non-linearities (including ReLU). However, I don't see the link with the "recursive kernel" idea (maybe it's just the way to do the computation described in Cho & Saul (2009)?)
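For concreteness, the recursion I believe is meant here — the ReLU (arc-cosine) kernel computation of Cho & Saul (2009), written from memory, so the variance constants may be named or scaled differently in the paper — is:

K^{(l+1)}(x, x') = C_b + \frac{C_w}{2\pi} \sqrt{K^{(l)}(x,x)\,K^{(l)}(x',x')} \left( \sin\theta^{(l)} + (\pi - \theta^{(l)}) \cos\theta^{(l)} \right),
\quad \theta^{(l)} = \arccos\!\left( K^{(l)}(x,x') \big/ \sqrt{K^{(l)}(x,x)\,K^{(l)}(x',x')} \right).

If this is indeed what is being computed, stating the closed form explicitly would make the link between eq (9) and the "recursive kernel" idea much clearer.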
Some places where it appears that there are minor mistakes:
* 7th line from the bottom of page 3, the vector f^{(2)}(x) contains f_i^{(1)}(x) but should contain f_i^{(2)}(x)
* last display of page 3: change x and x', and indicate upper limit of the sum
* please double check variances C_w and/or \hat{C}_w appearing in equations in (9) and (13).
* line 2 of second paragraph after equations (8) and (9). The authors refer to equation (8) concerning the independence of the components of the output. I think they rather wanted to refer to (9). Same for first sentence before eq (14).
* middle of page 12: matrix LY should be RY. |
iclr_2018_By5ugjyCb | Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed -but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training -that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories. | The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate. I do think that it is a solid contribution with thorough experiments. I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits the novelty of the paper. It uses previously proposed clipping activation function for quantization of neural networks, adding a learnable parameter to this function.
_______________
ORIGINAL REVIEW:
This paper proposes to use a clipping activation function as a replacement for ReLU to train a neural network with quantized weights and activations. It shows empirically that even though the clipping activation function obtains a larger training error for the full-precision model, it maintains the same error when applying quantization, whereas training with a quantized ReLU activation function does not work in practice because it is unbounded.
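For reference, a minimal numpy sketch of the clipped-and-quantized forward pass as I understand it; the straight-through treatment of the rounding and the resulting gradient with respect to alpha are my assumptions about how it is trained, not something I verified against the authors' code:

```python
import numpy as np

def pact_forward(x, alpha, num_bits=4):
    """Clip activations to [0, alpha], then quantize them to num_bits levels."""
    scale = (2 ** num_bits - 1) / alpha
    y = np.clip(x, 0.0, alpha)          # clipping activation replacing ReLU
    y_q = np.round(y * scale) / scale   # uniform quantization on [0, alpha]
    return y_q

def pact_grad_alpha(x, alpha):
    """Straight-through gradient of the output w.r.t. alpha:
    inputs below the clipping level do not depend on alpha, inputs above do."""
    return (x >= alpha).astype(np.float32)
```

With this gradient, alpha is simply updated by back-propagation like any other parameter.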
The experiments are thorough, and report results on many datasets, showing that PACT can reduce down to 4 bits of quantization of weights and activation with a slight loss in accuracy compared to the full-precision model.
Related to that, it seems a bit of an overclaim to state that the accuracy decrease from quantizing the DNN with PACT is much smaller than with previous quantization methods because the decrease is smaller than or equal to 1%, while competing methods' accuracy decrease compared to the full-precision model is more than 1%. Also, it is unfair to compare to the full-precision model using clipping, because the ReLU activation function in full precision is the standard and gives much better results, and this should be the reference accuracy. Also, previous methods take as reference the model with the ReLU activation function, so it could be that in absolute terms the accuracy of competing methods is actually higher than when using PACT to quantize the DNN.
OTHER COMMENTS:
- the list of contributions is a bit strange. It seems that the true contribution is number 1 on the list, which is to introduce the parameter \alpha in the activation function that is learned with back-propagation, which reduces the quantization error with respect to using ReLU as the activation function. Providing an analysis of why it works and quantitative results is part of the same contribution, I would say.
iclr_2018_BJ7d0fW0b | Imitation learning relies on expert demonstrations. Existing approaches often require that the complete demonstration data, including sequences of actions and states are available. In this paper, we consider a realistic and more difficult scenario where a reinforcement learning agent only has access to the state sequences of an expert, while the expert actions are not available. Inferring the unseen expert actions in a stochastic environment is challenging and usually infeasible when combined with a large state space. We propose a novel policy learning method which only utilizes the expert state sequences without inferring the unseen actions. Specifically, our agent first learns to extract useful sub-goal information from the state sequences of the expert and then utilizes the extracted sub-goal information to factorize the action value estimate over state-action pairs and subgoals. The extracted sub-goals are also used to synthesize guidance rewards in the policy learning. We evaluate our agent on five Doom tasks. Our empirical results show that the proposed method significantly outperforms the conventional DQN method. | SIGNIFICANCE AND ORIGINALITY:
The authors propose to accelerate the learning of complex tasks by exploiting traces of experts.
Unlike the most common form of imitation learning or behavioral cloning, the authors
formulate their solution in the case where the expert’s state trajectory is observable,
but the expert’s actions are not. This is an important and useful problem in robotics and other
applications. Within this specific setting the authors differentiate their approach from others
by developing a solution that does NOT estimate an explicit dynamics model ( e.g., P( S’ | S, A ) ).
The benefits of not estimating an explicit action model are not really demonstrated in a clear way.
The authors articulate a specific solution that provides heuristic guidance rewards that cause the
learner to favor actions that achieve subgoals calculated from expert behavior
and refactors the representation of the Q function so that it
has a component that is a function of the subgoal extracted from the expert.
These subgoals are linear functions of the expert’s change in state (or change in state features).
The resultant policy is a function of the expert traces on which it depends.
The authors show they can retrain a new policy that does not require the expert traces.
As far as I am aware, this is a novel approach to the problem.
The authors claim that this factorization is important and useful but the paper doesn’t
really illustrate this well.
They demonstrate the usefulness of the algorithm against a DQN baseline on Doom game problems.
The algorithm learns faster than unassisted DQN as shown by learning curve plots.
They also evaluate the algorithms on the quality of the final policies for their approach, DQN,
and a supervised learning from demonstration approach ( LfD ) that requires expert actions.
The proposed approach does as well or better than competing approaches.
QUALITY
Ablation studies show that the guidance rewards are important to achieving the improved performance of the proposed method which is important confirmation that the architecture is working in the intended way. However, it would also be useful to do an ablation study of the “factorization” of action values. Is this important to achieving better results as well or is the guidance reward enough? This seems like a key claim to establish.
CLARITY
The details of the memory based kernel density estimation and neural gradient training seemed
complicated by the way that the process was implemented. Is it possible to communicate
the intuitions behind what is going on?
I was able to work out the intuitions behind the heuristic rewards, but I still don’t clearly get
what the Q-value factorization is providing:
To keep my text readable, I assume we are working in feature space
instead of state space and use different letters for learner and expert:
Learner: S = \phi(s)
Expert’s i^th state visit: Ei = \phi( \hat{s}_i ) where Ei’ is the successor state to Ei
The paper builds upon approximate n-step discrete-action Q-learning
where the Q value for an action is a linear function of the state features:
Qp(S,a) = Wa S + Ba
where parameters p = ( Wa, Ba ).
After observing an experience ( S,A,R,S’ ) we use Bellman Error as a loss function to optimize Qp for parameter p.
I ignore the complexities of n-step learning and discount factors for clarity.
Loss = E[ R + MAXa’ Qp(S’,a’) - Qp(S,a) ]
The authors suggest we can augment the environment reward R
with a heuristic reward Rh proportional to the similarity between
the learner “subgoal" and the expert “subgoal" in similar states.
The authors propose to use cosine distance between representations
of what they call the “subgoals” of learner and expert.
A subgoal is defined as a linear transformation of the distance traveled by an agent during a transition.
The heuristic reward is proportional to the cosine distance between the learner and expert “subgoals"
Rh = B < Wv LearnerDirectionInStateS,
Wv ExpectedExpertDirectionInStatesSimilarToS >
The learner’s direction in state S is just (S-S’) in feature space.
The authors model the behavior of the expert as a kernel density type approximator
giving the expected direction of the expert starting from a states similar to the one the learner is in.
Let < Wk S, Wk Ej > be a weighted similarity between learner state features S and expert state features Ej
and Ej’ be the successor state features encountered by the expert.
Then the expected expert direction for learner state S is:
SUMj < Wk S, Wk Ej > ( Ej - Ej’ )
Presumably the linear Wk transform helps us pick out the important dimensions of similarity between S and Ej.
Mapping the learner and expert directions into subgoal space using Wv, the heuristic reward is
Rh = B < Wv (S-S’),
Wv SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) >
I ignore the ReLU here, but I assume that it operates element-wise and just clips negative values?
There is only one layer here so we don’t have complex non-linear things going on?
In addition to introducing a heuristic reward term, the authors propose to alter the Q-function
to be specific to the subgoal.
Q( S,a,g ) = g(S) Wa S + Ba
The subgoal is the same as the first part, namely a linear transform of the expected expert direction in
states similar to state S.
g(S) = Wv SUMj < Wk S, Wk Ej > ( Ej - Ej’ )
So in some sense, the Q function is really just a function of S, as g is calculated from S.
Q( S,a ) = g(S) Wa S + Ba
So this allows the Q-function more flexibility to capture each subgoal in a different linear space?
I don’t really get the intuition behind this formulation. It allows the subgoal to adjust the value
of the underlying model? Essentially the expert defines a new Q-value problem at every state
for the learner? In some sense, are we defining a model for the action taken by the expert?
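To pin down my reading of the two ingredients above, here is a small numpy sketch — my own reconstruction from the text, with the ReLU and any normalisation omitted; the shapes of Wk, Wv, Wa are my assumption, and the expert memory is simply the matrices E, E_next of visited states and their successors:

```python
import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def expert_subgoal(S, E, E_next, Wk, Wv):
    """g(S): expected expert direction in states similar to S, mapped by Wv.
    S: (d,) learner state features; E, E_next: (m, d) expert states / successors."""
    sims = (Wk @ S) @ (Wk @ E.T)       # similarity <Wk S, Wk Ej>, shape (m,)
    expert_dir = sims @ (E - E_next)   # weighted sum of expert directions, shape (d,)
    return Wv @ expert_dir

def guidance_reward(S, S_next, E, E_next, Wk, Wv, beta):
    """Rh: similarity between the learner's direction and the expert subgoal."""
    learner_dir = Wv @ (S - S_next)
    return beta * cosine(learner_dir, expert_subgoal(S, E, E_next, Wk, Wv))

def factorized_q(S, E, E_next, Wk, Wv, Wa, Ba):
    """Q(S, a, g) = g(S)^T (Wa S) + Ba for each action a (Wa stacked per action)."""
    g = expert_subgoal(S, E, E_next, Wk, Wv)        # shape (q,)
    return np.einsum('q,aqd,d->a', g, Wa, S) + Ba   # one value per action
```

If this is roughly right, then g is a deterministic function of S (through the expert memory), which is why I say above that the Q-function is really just a function of S.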
ADDITIONAL THOUGHTS
While the authors compare to an unassisted baseline, they don’t compare to methods that use an action model
which is not a fatal flaw but would have been nice.
One can imagine there might be scenarios where the local guidance rewards of this
form could be problematic, particularly in scenarios where the expert and learner are not identical
and it is possible to return to previous states, such as the grid worlds the authors discuss:
If the expert’s first few transitions were easily approximable,
the learner would get local rewards that cause it to mimic expert behavior.
However, if the next step in the expert’s path was difficult to approximate,
then the reward for imitating the expert would be lower.
Would the learner then just prefer to go back towards those states that it can approximate and endlessly loop?
In this case, perhaps expressing heuristic rewards as potentials as described in Ng’s shaping paper might solve the problem.
PROS AND CONS
Important problem generally. Avoiding the estimation of a dynamics model was stated as a given, but perhaps more could be put into motivating this goal. Hopefully it is possible to streamline the methodology section to communicate the intuitions more easily. |
iclr_2018_SJQHjzZ0- | Published as a conference paper at ICLR 2018 QUANTITATIVELY EVALUATING GANS WITH DIVERGENCES PROPOSED FOR TRAINING
Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores. | This paper proposes using divergence and distance functions typically used for generative model training to evaluate the performance of various types of GANs. Through numerical evaluation, the authors observed that the behavior is consistent across various proposed metrics and the test-time metrics do not favor networks that use the same training-time criterion.
More specifically, the evaluation metrics used in the paper are: 1) Jensen-Shannon divergence, 2) Constrained Pearson chi-squared, 3) Maximum Mean Discrepancy, 4) Wasserstein Distance, and 5) Inception Score. They applied those metrics to compare three different GANs: the standard DCGAN, Wasserstein DCGAN, and LS-DCGAN on MNIST and CIFAR-10 datasets.
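As a concrete reference for one of these metrics, squared MMD with an RBF kernel between two finite sample sets can be estimated as below (a generic, biased estimator; the kernel and bandwidth choices in the paper may differ):

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X (n, d) and Y (m, d)
    under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def pairwise_sq_dists(A, B):
        return np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    Kxx = np.exp(-gamma * pairwise_sq_dists(X, X))
    Kyy = np.exp(-gamma * pairwise_sq_dists(Y, Y))
    Kxy = np.exp(-gamma * pairwise_sq_dists(X, Y))
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()
```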
Summary:
——
In summary, it is an interesting topic, but I think that the paper does not have sufficient novelty. Some empirical results are still preliminary. It is hard to judge the effectiveness of the proposed metrics for model selection, and it is not clear that those metrics are better qualitative descriptors to replace visual assessment. In addition, the writing should be improved. See comments below for details and other points.
Comments:
——
1. In Section 3, the evaluation metrics are existing metrics and some of them have already been used in comparing GAN models. Maximum mean discrepancy has been used before in work by Yujia Li et al. (2016, 2017)
2. In the experiments, the proposed metrics were only tested on small scale datasets; the authors should evaluate on larger datasets such as CIFAR-100, Toronto Faces, LSUN bedrooms or CelebA.
3. In the experiments, the authors noted that “Gaussian observable model might not be the ideal assumption for GANs. Moreover, we observe a high log-likelihood at the beginning of training, followed by a drop in likelihood, which then returns to the high value, and we are unable to explain why this happens.” Could the authors give an explanation for this phenomenon? The authors should look into this more carefully.
4. In algorithm 1, it seems that the distance is computed via gradient descent. Is it possible to show that the optimization always converges? Is it meaningful to compare the metrics if some of them cannot be properly computed?
5. With many different metrics for assessing GANs, how should people choose? How do we trust the scores? Recently, Fréchet Inception Distance (FID) was proposed to evaluate the samples generated from GANs (Heusel et al. 2017), how are the above scores compared with FID?
Minor Comments:
——
1. Writing should be fixed: “It seems that the common failure case of MMD is when the mean pixel intensities are a better match than texture matches (see Figure 5), and the common failure cases of IS happens to be when the samples are recognizable textures, but the intensity of the samples are either brighter or darker (see Figure 2).” |
iclr_2018_Bki1Ct1AW | Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions. This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions. Here, we developed an analytical method to factorize a large number of retinal ganglion cells' spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information. In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity. On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations where the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance. Our results thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity. | This study proposes the use of non-negative matrix factorization accounting for baseline by subtracting the pre-stimulus baseline from each trial and subsequently decompose the data using a 3-way factorization thereby identifying spatial and temporal modules as well as their signed activation. The method is used on data recorded from mouse and pig retinal ganglion cells of time binned spike trains providing improved performance over non-baseline corrected data.
Pros:
The paper is well written, the analysis interesting and the application of the Tucker2 framework sound. Removing baseline is a reasonable step and the paper includes analysis of several spike-train datasets. The analysis of the approaches in terms of their ability to decode is also sound and interesting.
Cons:
I find the novelty of the paper limited:
The authors extend the work by Onken et al. (2016) to subtract the baseline (a rather marginal innovation over this approach), use a semi-NMF type of update rule (as proposed by Ding et al. 2010), and apply the approach to new spike-train datasets, evaluating performance by decoding ability (decoding was also considered in Onken et al. 2016).
Multiplicative update rules are known to suffer from slow convergence, and I would suspect this is also an issue for the semi-NMF update rules. It would therefore be relevant and quite easy to consider other approaches such as active set or column-wise updating (also denoted HALS), which admit negative values in the optimization; see also the review by N. Gillis
https://arxiv.org/abs/1401.5226
as well as for instance:
Nielsen, Søren Føns Vind, and Morten Mørup. "Non-negative tensor factorization with missing data for the modeling of gene expressions in the human brain." Machine Learning for Signal Processing (MLSP), 2014 IEEE International Workshop on. IEEE, 2014.
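To illustrate the kind of update I mean, a HALS-style sweep for ||V - WH||_F^2 looks as follows (a generic sketch, not specific to the space-by-time model of the paper; dropping the max(0, .) projection on one factor gives a semi-NMF-style variant in which that factor may go negative):

```python
import numpy as np

def hals_step(V, W, H):
    """One sweep of HALS coordinate updates for V ~ W H,
    with V (m, n), W (m, r) >= 0, H (r, n) >= 0."""
    HHt = H @ H.T
    VHt = V @ H.T
    for k in range(W.shape[1]):
        W[:, k] = np.maximum(0.0, W[:, k] + (VHt[:, k] - W @ HHt[:, k]) / (HHt[k, k] + 1e-12))
    WtW = W.T @ W
    WtV = W.T @ V
    for k in range(H.shape[0]):
        H[k, :] = np.maximum(0.0, H[k, :] + (WtV[k, :] - WtW[k, :] @ H) / (WtW[k, k] + 1e-12))
    return W, H
```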
It would improve the paper to also discuss that the non-negativity constrained Tucker2 model may be subject to local minima solutions and have issues of non-uniqueness (i.e. rotational ambiguity). At least local minima issues could be assessed using multiple random initializations.
The results are in general only marginally improved by the baseline corrected non-negativity constrained approach. For comparison the existing methods ICA, Tucker2 should also be evaluated for the baseline corrected data, to see if it is the constrained representation or the preprocessing influencing the performance. Finally, how performance is influenced by dimensionality P and L should also be clarified.
It seems that it would be more natural to model the baseline by including mean values in the model rather than treating the baseline as a preprocessing step. This would unify the entire framework as one model and potentially make it possible to avoid structure well represented by the Tucker2 representation being removed by the preprocessing.
Minor:
The approach corresponds to a Tucker2 decomposition with non-negativity constrained factor matrices and unconstrained core - please clarify this as you also compare to Tucker2 in the paper with orthogonal factor matrices.
Ding et al. in their semi-NMF work provide an elaborate derivation with convergence guarantees. In the present paper these details are omitted, and it is unclear how the update rules are derived from the KKT conditions and the Lagrange multiplier, and how they differ from standard semi-NMF; this should be better clarified.
iclr_2018_HkZy-bW0- | Published as a conference paper at ICLR 2018 TEMPORALLY EFFICIENT DEEP LEARNING WITH SPIKES
The vast majority of natural sensory data is temporally redundant. For instance, video frames or audio samples which are sampled at nearby points in time tend to have similar values. Typically, deep learning algorithms take no advantage of this redundancy to reduce computations. This can be an obscene waste of energy. We present a variant on backpropagation for neural networks in which computation scales with the rate of change of the data -not the rate at which we process the data. We do this by implementing a form of Predictive Coding wherein neurons communicate a combination of their state, and their temporal change in state, and quantize this signal using Sigma-Delta modulation. Intriguingly, this simple communication rule give rise to units that resemble biologically-inspired leaky integrate-and-fire neurons, and to a spike-timing-dependent weight-update similar to Spike-Timing Dependent Plasticity (STDP), a synaptic learning rule observed in the brain. We demonstrate that on MNIST, on a temporal variant of MNIST, and on Youtube-BB, a dataset with videos in the wild, our algorithm performs about as well as a standard deep network trained with backpropagation, despite only communicating discrete values between layers. | This paper presents a novel method for spike based learning that aims at reducing the needed computation during learning and testing when classifying temporal redundant data. This approach extends the method presented on Arxiv on Sigma delta quantized networks (Peter O’Connor and Max Welling. Sigma delta quantized networks. arXiv preprint arXiv:1611.02024, 2016b.). Overall, the paper is interesting and promising; only a few works tackle the problem of learning with spikes showing the potential advantages of such form of computing. The paper, however, is not flawless. The authors demonstrate the method on just two datasets, and effectively they show results of training only for Feed-Forward Neural Nets (the authors claim that “the entire spiking network end-to-end works” referring to their pre-trained VGG19, but this paper presents only training for the three top layers). Furthermore, even if suitable datasets are not available, the authors could have chosen to train different architectures. The first dataset is the well-known benchmark MNIST also presented in a customized Temporal-MNIST. Although it is a common base-line, some choices are not clear: why using a FFNN instead that a CNN which performs better on this dataset; how data is presented in terms of temporal series – this applies to the Temporal MNIST too; why performances for Temporal MNIST – which should be a more suitable dataset — are worse than for the standard MNIST; what is the meaning of the right column of Figure 5 since it’s just a linear combination of the GOps results. For the second dataset, some points are not clear too: why the labels and the pictures seem not to match (in appendix E); why there are more training iterations with spikes w.r.t. the not-spiking case. Overall, the paper is mathematically sound, except for the “future updates” meaning which probably deserves a clearer explanation. Moreover, I don’t see why the learning rule equations (14-15) are described in the appendix, while they are referred constantly in the main text. The final impression is that the problem of the dynamical range of the hidden layer activations is not fully resolved by the empirical solution described in Appendix D: perhaps this problem affects CCNs more than FFN.
Finally, there are some minor issues here and there (the authors show quite some lack of attention for just 7 pages):
- Two times “get” in “we get get a decoding scheme” in the introduction;
- Two times “update” in “our true update update as” in Sec. 2.6;
- Page 3: correct the capital S in 2.3.1
- Page 4: Figure 1, increase font size (also for Figure 2); close bracket after Equation 3; N (number of spikes) is not defined
- Page 5: “one-hot” or “onehot”;
- in the inline equation the sum goes from n=1 to S, while in eq.(8) it goes from n=1 to N;
- Eq(10)(11)(12) and some lines have a typo (a \cdot) just before some of the ws;
- Page 6: k_{beta} is not defined in the main text;
- Page 7: there are two “so that” in 3.1; capital letter in “It used 32x10^12..”; besides, why not report here the difference in computation w.r.t. non-spiking nets?
- Page 7: in 3.2, “discussed in 1” — is this section 1?
- Page 14: Appendix E, why don’t the labels match the pictures;
- Page 14: Appendix F, explain better the architecture used for this experiment.
iclr_2018_r1h2DllAW | The increasing demand for neural networks (NNs) being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs. While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs. To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions. In our experiments, we show that our model achieves state of the art performance on several real world data sets. In addition, the resulting models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference. | In this work, discrete-weight NNs are trained using the variational Bayesian framework, achieving similar results to other state-of-the-art models. Weights use 3 bits on the first layer and are ternary on the remaining layers.
- Pros:
The paper is well-written and connections with the literature properly established.
The approach to training discrete-weights NNs, which is variational inference, is more principled than previous works (but see below).
- Cons:
The authors depart from the original motivation when the central limit theorem is invoked. Once we approximate the activations with Gaussians, do we have any guarantee that the new approximate lower bound is actually a lower bound? This is not discussed. If it is not a lower bound, what is the rationale behind maximizing it? This seems to place this work very close to previous works, and not in the "more principled" regime the authors claim to seek.
The likelihood weighting seems hacky. The authors claim "there are usually many more NN weights than there are data samples". If that is the case, then it seems that the prior dominating is indeed the desired outcome. A different, flatter prior (or parameter sharing) can be used, but the described reweighting seems to actually break a good property of Bayesian inference, which is defaulting to the prior when evidence is lacking.
In terms of performance (Table 1), the proposed method seems to be on par with existing ones. It is unclear then what the advantage of this proposal is.
Sparsity figures are provided for the current approach, but those are not contrasted with existing approaches. Speedup is claimed with respect to an NN with real weights, but not with respect existing NNs with binary weights, which is the appropriate baseline.
- Minor comments:
Page 3: Subscript t and variable t are used for the targets, but I can't find where they are defined.
Only the names of the datasets used in the experiments are given, but they are not described, or even better, shown in pictures (maybe in a supplementary).
The title of the paper says "discrete-valued NNs". The weights are discrete, but the activations and outputs are continuous, so I find it confusing. As a contrast, I would be less surprised to hear a sigmoid belief network called a "discrete-valued NN", even though its weights are continuous. |
iclr_2018_Bys4ob-Rb | Published as a conference paper at ICLR 2018 CERTIFIED DEFENSES AGAINST ADVERSARIAL EXAMPLES
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error. | This paper develops a new differentiable upper bound on the performance of classifier when the adversarial input in l_infinity is assumed to be applied.
While the attack model is quite general, the current bound is only valid for linear models and NNs with one hidden layer, so the result is quite restrictive.
However, the new bound is an "upper" bound on the worst-case performance, which is very different from the conventional sampling-based "lower" bounds. Therefore, minimizing this upper bound together with a classification loss makes perfect sense and provides a theoretically sound approach to training a robust classifier.
This paper provides a gradient of this new upper bound with respect to model parameters so we can apply the usual first order optimization scheme to this joint optimization (loss + upper bound).
In conclusion, I recommend this paper to be accepted, since it presents a new and feasible direction of a principled approach to train a robust classifier, and the paper is clearly written and easy to follow.
There are possible future directions to be developed.
1. Apply the sum-of-squares (SOS) method.
The paper's SDP relaxation is the straightforward relaxation of the underlying Quadratic Program (QP), and in terms of the SOS relaxation hierarchy, it is the first level. One can increase the complexity by going beyond the first level, and this should provide a computationally more challenging but tighter upper bound.
The paper already mentions this direction, and it would be interesting to see the experimental results (a sketch of the generic relaxation I have in mind is given after these points).
2. Develop a similar relaxation for deep neural networks.
The authors already mentioned that they are pursuing this direction. While extending the result to general deep neural networks might be hard, residual networks may be fine thanks to their structure.
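For reference on point 1, the generic pattern I have in mind (schematic; the exact formulation and constraints in the paper differ) lifts the quadratic program by introducing X = x x^\top and dropping the rank-one constraint:

\max_{x \in \{-1,1\}^n} x^\top M x \;\le\; \max_{X \succeq 0,\ \mathrm{diag}(X) = \mathbf{1}} \langle M, X \rangle .

This is the first level of the SOS/Lasserre hierarchy; level t instead constrains a moment matrix indexed by all monomials of degree up to t, which tightens the bound at the cost of an SDP whose size grows roughly as n^t.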
iclr_2018_HyiRazbRb | Auto-encoders are commonly used for unsupervised representation learning and for pre-training deeper neural networks. When its activation function is linear and the encoding dimension (width of hidden layer) is smaller than the input dimension, it is well known that auto-encoder is optimized to learn the principal components of the data distribution Oja (1982). However, when the activation is nonlinear and when the width is larger than the input dimension (overcomplete), auto-encoder behaves differently from PCA, and in fact is known to perform well empirically for sparse coding problems. We provide a theoretical explanation for this empirically observed phenomenon, when rectified-linear unit (ReLu) is adopted as the activation function and the hidden-layer width is set to be large. In this case, we show that, with significant probability, initializing the weight matrix of an auto-encoder by sampling from a spherical Gaussian distribution followed by stochastic gradient descent (SGD) training converges towards the ground-truth representation for a class of sparse dictionary learning models. In addition, we can show that, conditioning on convergence, the expected convergence rate is O( 1 t ), where t is the number of updates. Our analysis quantifies how increasing hidden layer width helps the training performance when random initialization is used, and how the norm of network weights influence the speed of SGD convergence. | This paper shows that an idealized version of stochastic gradient descent converges when learning autoencoders with ReLu non-linearities under strong sparsity assumptions. Convergence rates are also determined. The result is another one in the emerging line of proving convergence guarantees for non-convex optimization problems arising in machine learning, and aims to explain certain phenomena experienced in practice.
The paper is generally nicely written, providing intuitions, but there are several typos (both in the text and in the math, e.g., missing indices), which should also be corrected.
On the negative side, while the proof technique in general looks plausible, there seem to be some mistakes in the derivations, which must be corrected before the paper can be accepted. Also, the assumptions in the paper seem quite restrictive, and their implications are not discussed thoroughly.
The assumptions are the following:
1. The input data comes from a mixture distribution, of the form x = w_I + eps, where {w_1,...,w_k} is a collection of unit vectors, I is uniform in {1,...,k}, and eps is some noise (independent for each sample).
2. The maximum norm of the noise is O(1/k).
3. The number n of hidden neurons in the autoencoder is Omega(k) (this is not explicitly assumed but is necessary to make the probability of "incorrect" initialization small as well as the results to hold).
Under these assumptions it is shown that the weights of the autoencoder converge to the centers {w_1,...,w_k} (i.e., for any i the autoencoder has at least one weight converging to w_i). The rate of convergence depends on the coherence of the vectors w_i: the less coherent they are the faster the convergence is.
First notice that some assumptions are missing from the main statement, as the error probability delta is certainly connected to the probability of incorrect initialization: when n=1<k, the convergence result clearly cannot hold. This comes from the mistake that in Theorem 3 you state the bound for the probability P(F^\infty) instead of the conditional probability P(F^\infty|E_o) (this is present everywhere in the proof). Theorem 3 should also depend on delta_o, which is used in the definition of F^\infty.
Theorem 2 also seems incorrect. Intuitively, the question is why it cannot happen that two neurons contribute to reproducing a given w_i, and so neither of their weights converge to w_i: E.g., assuming that {w_1,...,w_k,w_1',...,w_k'} form an orthogonal system and the noise is 0, the weight matrix of size n=2k defined as W_{2i-1,*}^T = 1/sqrt{2}(w_i + w'_i) and W_{2i,*}^T=1/sqrt{2}(w_i - w'_i), i \in [k], with 0 bias can exactly recover any x=w_i (indeed, W_{2j-1,*} x= W_{2j,*} x = 1/sqrt{2}, while the other products are 0, and so W^T W x = W^T W w_j = 1/sqrt{2}(W_{2j-1,*}+W_{2j,*})^T = w_j). Then SGD does not change the weights and hence cannot recover the original weights {w_i }, in particular, it cannot increase the coherence in any step, contradicting Theorem 2. This counterexample can be extended even to the situation when k>d, as--in fact--we only need that the existence of a single j such that w_j and w'_j are orthogonal and also orthogonal to the other basis vectors.
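For what it is worth, here is a quick numerical check of this construction (taking, for concreteness, standard basis vectors for the w_i and w'_i, zero noise, and zero bias; relu is the element-wise positive part):

```python
import numpy as np

k = 3
d = 2 * k
w = np.eye(d)[:k]          # w_1, ..., w_k
w_prime = np.eye(d)[k:]    # w'_1, ..., w'_k

# Rows 2i and 2i+1 of W are (w_i + w'_i)/sqrt(2) and (w_i - w'_i)/sqrt(2)
W = np.zeros((2 * k, d))
W[0::2] = (w + w_prime) / np.sqrt(2)
W[1::2] = (w - w_prime) / np.sqrt(2)

relu = lambda z: np.maximum(z, 0.0)

# Perfect reconstruction of every w_j with zero bias ...
for j in range(k):
    assert np.allclose(W.T @ relu(W @ w[j]), w[j])

# ... yet no row of W is aligned with any w_i (all cosines are 1/sqrt(2) or 0),
# so the reconstruction loss is already zero, SGD does not move the weights,
# and the ground-truth directions are never recovered.
print(np.abs(W @ w.T).max())   # 0.7071..., never 1
```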
The assumptions are also very strange in the sense that the norm of the noise is bounded by O(1/k), thus the more modes the input distribution has the more separable they become. What motivates this scaling? Furthermore, the parameters of the algorithm for which the convergence is claimed heavily depend on the problem parameters, which are not known. How can you instantiate the algorithm then (accepting the ideal definition of b)? What are the consequences?
Given the above, at this point I cannot recommend the paper for acceptance. However, if the above problems are resolved, I would be very happy to see the paper at the conference.
Other comments
-----------------------
- Add a short derivation of why the weights of the autoencoder should converge to the w_i.
- Definition 3: C_j is not defined in the main text.
- While it is mentioned multiple times that the interesting regime is d<n, this is actually never used, nor needed (personally, I have never seen such an autoencoder--please give some references). What is really needed is n>k, which is natural if one wants to preserve the information, and also k>d for a rich family of distributions.
- The area of the spherical cap is well understood (up to multiplicative constants), and better bounds than yours are readily available: with a cap of height 1-t, for sqrt{2/d}<t<1, the relative surface of the cap is between P/6 and P/2 where
P=1/(t \sqrt{d}) (1-t^2)^{(d-1)/2}; see, e.g., A. Brieden, P. Gritzmann, R. Kannan, V. Klee, L. Lovasz, and M. Simonovits. Deterministic and randomized polynomial-time approximation of radii. Mathematika. A Journal of Pure and Applied Mathematics, 48(1-2):63–105, 2001.
- The notation section should be brought forward (or referred the fist time the notation is actually used).
- Instead of unit spherical Gaussian you could simply say uniform distribution on the unit sphere
- While Algorithm 1 is called "norm-controlled SGD training," it does not control the norm at all. |
iclr_2018_r1kNDlbCb | Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora. | Summary: In this work, the authors propose a text reconstructing auto encoder which takes a sentence as the input sequence and an integrated text generator generates another version of the input text while a reconstructor determines how well this generated text reconstructs the original input sequence. The input to the discriminator (as real data) is a sentence that summarizes the ground truth sentences (rather than the ground truth sentences themselves). The experiments are conducted in two datasets of English and Chinese corpora.
Strengths:
The proposed idea of generating text using summary sentences is new.
The model overview in Figure 1 is informative.
The experiments are conducted on English and Chinese corpora, and comparisons with competitive baselines are provided.
Weaknesses:
The paper is poorly written, which makes it difficult to understand. The second paragraph of the introduction is quite cryptic. Even after reading the entire paper a couple of times, it is not clear how the summary text is obtained, e.g. do the authors ask annotators to read sentences and summarize them? If so, based on which criteria do the annotators summarize the text, and how many annotators are there? Similarly, if so, this would mean that the authors use additional supervision compared to the baseline models. Please clarify how the summary text is obtained.
In footnote 1, the authors mention the term "seq2seq2seq2", which they do not explain anywhere in the text.
No experiments that generate raw text (without using summaries) are provided. It would be interesting to see whether the GAN learns to memorize the ground truth sentences or generates sentences with enough variation.
On the English Gigaword dataset the results consistently drop compared to WGAN. This behavior is observed both in the unsupervised setting and in the two versions of the transfer learning setting. There are too few qualitative results: one positive qualitative result is provided in Figure 3 and one negative qualitative result in Figure 4. Therefore, it is not easy for the reader to judge the behavior of the model well.
The choice of the evaluation metric is not well motivated. The standard measures in the literature also include METEOR, CIDER and SPICE. It would be interesting to see how the proposed model performs in these additional criteria. Moreover, the results are not sufficiently discussed.
As a general remark, although the idea presented in this paper is interesting, the paper has not yet reached, in terms of both writing and evaluation, the maturity expected from an ICLR paper. Regarding writing, the definite and indefinite articles are sometimes missing and sometimes overused; similarly, there are frequent singular/plural mismatches. This makes the paper very difficult to read. Often the reader needs to guess what is actually meant. Regarding the experiments, presenting results with multiple evaluation criteria and showing more qualitative results would improve the exposition.
Minor comments:
Page 5: real or false —> real or fake (true or false)
the lower loss it get —> ? |
iclr_2018_SJDYgPgCZ | To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions. We introduce interesting aspects for understanding the local minima and overall structure of the loss surface. The parameter domain of the loss surface can be decomposed into regions in which activation values (zero or one for rectified linear units) are consistent. We found that, in each region, the loss surface have properties similar to that of linear neural networks where every local minimum is a global minimum. This means that every differentiable local minimum is the global minimum of the corresponding region. We prove that for a neural network with one hidden layer using rectified linear units under realistic assumptions. There are poor regions that lead to poor local minima, and we explain why such regions exist even in the overparameterized DNNs. | This paper attempts to extend analytical results pertaining to the loss surface of linear networks to a nonlinear network with a single hidden ReLU layer. Unfortunately though, at this point I feel that the theoretical results, which constitute the majority of the paper, are of limited novelty and/or significance. However, I still remain very open to counterarguments to this opinion and the points raised below.
First, I don't believe that Lemma 2.2 is precisely true, at least as currently stated. In particular, it would appear that L_f could have a differentiable local minimum that is only a saddle point in L_gA. For example, if there is a differentiable valley in L_f that terminates on the boundary of an activation region, then this phenomenon could occur, since a local-minimum-creating boundary in L_f might just lead to a saddle point in L_gA. Regardless, the basic premise of this result is quite straightforward anyway.
Turning to Lemma 2.3 and 2.4, I don't understand the relevance of these results. Where are they needed later or applied? Additionally, Theorem 2.5 is very related to results already proven for linear networks in earlier work (Kawaguchi, 2016), so there is little novelty here.
There also seem to be issues with Corollary 2.7, which as an aggregation result can be viewed as the main contribution of the paper. Part (1) of this corollary is obvious. Part (2) depends on Lemma 2.2, which as stated previously may be problematic. Most seriously though, Part (3) only considers critical points (i.e., derivative equal to zero), not local minima occurring at non-differentiable locations. To me this greatly mutes the value of this result, and the contribution of the paper overall, because local minima are *very* likely to occur on the boundary between activation regions at non-differentiable points (e.g. as in Figure 2). I therefore don't understand the utility of only considering the differentiable local minima.
Overall though, the main point that within areas of fixed activation the network behaves much like a linear network (with all local minima also global minima when constrained within each region), is not especially noteworthy, because it provides no pathway for comparing minima from different activation regions, which is the main problem to begin with.
Beyond this, the paper makes a few less-technical observations regarding bad local minima. For example, in Section 3.1 the argument is made that the linear region created when all activations are equal to one will have a local minimum, and this minimum might be suboptimal. However, these arguments pertain to the surrogate function L_gA, and if the minimum of L_gA occurs on the boundary to another activation region, then this solution might not be a local minimum of L_f, the real objective we care about. Am I missing something here?
As for Section 4.2, the paper needs to do a better job of explaining exactly what is being shown in Table 2. I can maybe guess, but it is not at all clear what the accuracy percentage is referring to, nor precisely how rich and random minima are computed. Also, the assumption that P(a = 1) = P(a = 0) = 0.5 is not very realistic, although admittedly this type of simplification is sometimes adopted in the literature.
Minor comment:
* Near the beginning in the introduction, it is claimed that "the vanishing gradient problem has been solved by using rectified linear units." This is not actually true, and portends problematic claims later in the paper. |
iclr_2018_ByED-X-0W | PARAMETRIC INFORMATION BOTTLENECK TO OPTIMIZE STOCHASTIC NEURAL NETWORKS
In this paper, we present a layer-wise learning of stochastic neural networks (SNNs) in an information-theoretic perspective. In each layer of an SNN, the compression and the relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all parameters in an SNN to better exploit the neural network's representation. Previously, the Information Bottleneck (IB) framework (Tishby et al. (1999)) extracts relevant information for a target variable. Here, we propose Parametric Information Bottleneck (PIB) for a neural network by utilizing (only) its model parameters explicitly to approximate the compression and the relevance. We show that, the PIB framework can be considered as an extension of the maximum likelihood estimate (MLE) principle to every layer level. We also show that, as compared to the MLE principle, PIB : (i) improves the generalization of neural networks in classification tasks, (ii) is more efficient to exploit a neural network's representation by pushing it closer to the optimal information-theoretical representation in a faster manner. | This paper proposes a learning method (PIB) based on the information bottleneck framework.
PIB pursues the very natural intuition outlined in the information bottleneck literature: hidden layers of deep nets compress the input X while maintaining sufficient information to predict the output Y.
It should be noted that the limitations of the IB for deep learning are currently under heavy discussion on OpenReview.
Optimizing the PIB objective is intractable and the authors propose an approximation that applies to binary valued stochastic networks.
They use a variational bound to deal with the relevance term, I(Z_l,Y), and Monte Carlo sampling to deal with the layer-by-layer compression term, I(Z_l,Z_{l+1}).
They present results on MNIST aiming to demonstrate that using PIBs improves generalization and training speed.
This is a timely and interesting topic. I enjoyed learning about the authors’ proposed approach to a practical learning method based on the information bottleneck. However, the writing made it challenging and the experimental protocol raised some serious questions. In summary, I think the paper needs very careful editing for grammar and language and, more importantly, it needs solid experiments before it’s ready for publication. When that is done it would make an exciting contribution to the community. More details follow.
Comments:
1. All architectures and objectives (both classic and PIB-based) are trained using a single, fixed learning rate (LR). In my opinion, this is a red flag. The PIB objective is new and different to the other objectives. Do all objectives happen to yield their best performance under the same LR? Maybe so, but we won’t know unless the experimental protocol prescribes a sufficient range of LRs for each architecture. In light of this, the fact that SFNN is given extra epochs in Figure 4 does not mean much.
2. The batch size for MNIST classification is unusually low (8). Common batch sizes range from 64 to 1K (typically >= 128). Why did the authors make this choice? Is 8 good for architectures A through E?
3. On a related note, the authors only seem to report results from a single random seed (ie. deterministic architectures are trained exactly once). I would like to see results from a few different random seeds. As a result of comments 1,2,3, even though I do believe in the merit of the intuition pursued and the techniques proposed, I am not convinced about the main claim of the paper. In particular, the experiments are not rigorous enough to give serious evidence that PIBs improve generalization and training speed.
4. The paper needs some careful editing both for language (cf. the following point) and for notation. The authors use the notation p_D() in eqn (12) without defining it. My best guess is that it is the same as p_u(), the underlying data distribution, but this makes parsing the paper hard. Finally there are a few steps that are not explained: for example, no justification is given for the inequality in eqn (13).
5. Language: the paper needs some careful editing to correct numerous language/grammar issues. At times it is detrimental to understanding. For example I had to read the text leading up to eqn (8) a number of times.
6. There is no discussion of computational complexity and wall-clock time comparisons. To be clear, I think that even if the proposed approach were to be slower than the state of the art it would still be very interesting. However, there should be some discussion and reporting of that aspect as well.
Minor comments and questions:
7. Mutual information is typically typeset using a semicolon instead of a comma, eg. I(X;Z).
8. Why is the mutual information in Figure 3 so low? Are you perhaps using natural logarithms to estimate and plot I(Z;Y)? If this is base-2 logarithms I would expect a value close to 1. |
iclr_2018_HJDUjKeA- | We show how discrete objects can be learnt in an unsupervised fashion from pixels, and how to perform reinforcement learning using this object representation. More precisely, we construct a differentiable mapping from an image to a discrete tabular list of objects, where each object consists of a differentiable position, feature vector, and scalar presence value that allows the representation to be learnt using an attention mechanism. Applying this mapping to Atari games, together with an interaction net-style architecture for calculating quantities from objects, we construct agents that can play Atari games using objects learnt in an unsupervised fashion. During training, many natural objects emerge, such as the ball and paddles in Pong, and the submarine and fish in Seaquest. This gives the first reinforcement learning agent for Atari with an interpretable object representation, and opens the avenue for agents that can conduct object-based exploration and generalization. | This paper learns to construct masks and feature representations from an input image, in order to represent objects. This is applied to the relatively simple domain of Atari games video input (compared to natural images). The paper is completely inadequate with respect to related work; it re-invents known techniques like non-maximum suppression and matching for tracking; fails to learn convincing objects according to visual inspection; and fails to compare with earlier methods for these tasks. (The comment above about re-invention is the most charitable interpretation -- the worst case would be using these ideas without citation.)
1) The related work section is outrageous, containing no references before 2016. Do the authors think researchers never tried to do this task before then? This is the bad side of the recent deep nets hype, and ICLR is particularly susceptible to this. Examples include
@article{wang-adelson-94,
author = "Wang, J. Y. A. and Adelson, E. H.",
title = {{Representing Moving Images with Layers}},
journal = {{IEEE Transactions on Image Processing}},
year = "1994",
volume = "3(5)",
pages = {625-638}
}
see http://persci.mit.edu/pub_pdfs/wang_tr279.pdf
and
@article{frey-jojic-03,
author = {Frey, B. J. and Jojic, N.},
title = {{Transformation Invariant Clustering Using the EM Algorithm}},
journal = {IEEE Trans Pattern Analysis and Machine Intelligence},
year = {2003},
volume = {25(1)},
pages = {1-17}
}
where mask and appearances for each object of interest are learned. There is a literature which follows on from the F&J paper. The methods used in Frey & Jojic are different from what is proposed in the paper, but there needs to be comparisons.
The AIR paper also contains references to relevant previous work.
2) p 3 center -- this seems to be reinventing non-maximum suppression
3) p 4 eq 3 and sec 3.2 -- please justify *why* it makes sense to use the concrete transform. Can you explain better (e.g. in the supp mat) the effect of this for different values of q_i?
4) Sec 3.5: Matching objects in successive frames using the Hungarian algorithm is also well known; e.g., it is available in the Matlab function assignDetectionsToTracks (a minimal sketch of this standard approach is given after these comments).
5) Overall: in this paper the authors come up with a method for learning objects from Atari games video input. This is a greatly restricted setting compared to real images. The objects learned, as shown in Appendix A, are quite unconvincing, e.g. on p 9. For example, for Boxing, why are the black and white objects broken up into 3 pieces, and why do they appear coloured in col 4?
Also the paper lacks comparisons to other methods (including ones from before 2016) which have tackled this problem.
It may be that the methods in this paper can outperform previous ones -- that would be interesting, but it would need a lot of work to address the issues raised above.
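Regarding comment 4, the standard matching baseline is only a few lines with an off-the-shelf Hungarian solver; the positions and the cost below are made up purely for illustration and have nothing to do with the paper's specific features:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Object positions detected in frame t and frame t+1 (toy numbers).
prev_pos = np.array([[10.0, 12.0], [40.0, 41.0], [75.0, 20.0]])
curr_pos = np.array([[41.0, 40.0], [11.0, 13.0], [74.0, 22.0]])

# Cost of matching object i in frame t to object j in frame t+1:
# here simply the Euclidean distance between positions (feature distances could be added).
cost = np.linalg.norm(prev_pos[:, None, :] - curr_pos[None, :, :], axis=-1)

row_ind, col_ind = linear_sum_assignment(cost)   # Hungarian algorithm
for i, j in zip(row_ind, col_ind):
    print(f"object {i} in frame t -> object {j} in frame t+1 (cost {cost[i, j]:.1f})")
```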
Text corrections:
p 2 "we are more precise" -> "we give more details"
p 3 and p 2 -- local maximum (not maxima) for a single maximum. [occurs many times] |
iclr_2018_rk49Mg-CW | STOCHASTIC VARIATIONAL VIDEO PREDICTION
Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication. | 1) Summary
This paper proposes a new method for predicting multiple future frames in videos. A new formulation is proposed where the frames' inherent noise is modeled separately from the uncertainty of the future. This separation allows for directly modeling the stochasticity in the sequence through a random variable z ~ p(z), where the posterior q(z | past and future frames) is approximated by a neural network; as a result, sampling a random future is possible by sampling from the prior p(z) during testing. The random variable z can be modeled in a time-variant or time-invariant way. Additionally, this paper proposes a training procedure to prevent the method from ignoring the stochastic phenomena modeled by z. In the experimental section, the authors highlight the advantages of their method on 1) a synthetic dataset of shapes meant to clearly show the stochasticity in the prediction, 2) two robotic arm datasets for video prediction with and without actions, and 3) a challenging human action dataset in which they perform future prediction only given previous frames.
2) Pros:
+ Novel/Sound future frame prediction formulation and training for modeling the stochasticity of future prediction.
+ Experiments on the synthetic shapes and robotic arm datasets highlight the proposed method’s power of multiple future frame prediction possible.
+ Good analysis on the number of samples improving the chance of outputting the correct future, the modeling power of the posterior for reconstructing the future, and a wide variety of qualitative examples.
+ Work is significant for the problem of modeling the stochastic nature of future frame prediction in videos.
3) Cons:
Approximate posterior in non-synthetic datasets:
The variable z seems to not be modeling the future very well. In the robot arm qualitative experiments, the robot motion is well modeled, however, the background is not. Given that for the approximate posterior computation the entire sequence is given (e.g. reconstruction is performed), I would expect the background motion to also be modeled well. This issue is more evident in the Human 3.6M experiments, as it seems to output blurriness regardless of the true future being observed. This problem may mean the method is failing to model a large variety of objects and clearly works for the robotic arm because a very similar large shape (e.g. robot arm) is seen in the training data. Do you have any comments on this?
Finn et al 2016 PSNR performance on Human 3.6M:
Is the same exact data, pre-processing, training, and architecture being utilized? In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38.
Additional evaluation on Human 3.6M:
PSNR is not a good evaluation metric for frame prediction as it is biased towards blurriness, and also SSIM does not give us an objective evaluation in the sense of semantic quality of predicted frames. It would be good if the authors present additional quantitative evaluation to show that the predicted frames contain useful semantic information [1, 2, 3, 4]. For example, evaluating the predicted frames for the Human 3.6M dataset to see if the human is still detectable in the image or if the expected action is being predicted could be useful to verify that the predicted frames contain the expected meaningful information compared to the baselines.
Additional comments:
Are all 15 actions being used for the Human 3.6M experiments? If so, the fact that the time-invariant model performs better than the time-variant one may not be due to a consistent action being performed (last sentence of 5.2). The motion performed by the actors in each action highly overlaps (the talking-on-the-phone action may go from sitting to walking a little to sitting again, and so on). Unless only actions such as walking and discussion were used, it is unlikely that the time-invariant z performs better because of a consistent action. Do you have any comments on this?
4) Conclusion
This paper proposes an interesting novel approach for predicting multiple futures in videos; however, the results are not fully convincing on all datasets. If the authors can provide additional quantitative evaluation besides PSNR and SSIM (e.g. evaluation of semantic quality), and also address the comments above, the current score will improve.
References:
[1] Emily Denton and Vighnesh Birodkar. Unsupervised Learning of Disentangled Representations from Video. In NIPS, 2017.
[2] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017.
[3] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv preprint arXiv:1710.10196, 2017.
[4] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In NIPS, 2017.
Revised review:
Given the authors' thorough answers to my concerns, I have decided to change my score. I would like to thank the authors for a very nice paper that will definitely help the community towards developing better video prediction algorithms that can now predict multiple futures. |
iclr_2018_BJ8c3f-0b | AUTO-ENCODING SEQUENTIAL MONTE CARLO
We build on auto-encoding sequential Monte Carlo (AESMC):
1 a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and experiment with a new training procedure which can improve both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement and scalable means for simultaneous model learning and proposal adaptation in deep generative models. | Update:
On further consideration (and reading the other reviews), I'm bumping my rating up to a 7. I think there are still some issues, but this work is both valuable and interesting, and it deserves to be published (alongside the Naesseth et al. and Maddison et al. work).
-----------
This paper proposes a version of IWAE-style training that uses SMC instead of classical importance sampling. Going beyond the several papers that proposed this simultaneously, the authors observe a key issue: the variance of the gradient of these IWAE-style bounds (w.r.t. the inference parameters) grows with their accuracy. They therefore propose using a more-biased but lower-variance bound to train the inference parameters, and the more-accurate bound to train the generative model.
Overall, I found this paper quite interesting. There are a few things I think could be cleared up, but this seems like good work (although I'm not totally up to date on the very recent literature in this area).
Some comments:
* Section 4: I found this argument extremely interesting. However, it's worth noting that your argument implies that you could get an O(1) SNR by averaging K noisy estimates of I_K. Rainforth et al. suggest this approach, as well as the approach of averaging K^2 noisy estimates, which the theory suggests may be more appropriate if the functions involved are sufficiently smooth; I think they should be, even for ReLU networks, which are non-differentiable only at a finite number of points.
This paper would be stronger if it compared with Rainforth et al.’s proposed approaches. This would demonstrate the real tradeoffs between bias, variance, and computation. Of course, that involves O(K^2) or O(K^3) computation, which is a weakness. But one could use a small value of K (say, K=5).
That said, I could also imagine a scenario where there is no benefit to generating multiple noisy samples for a single example versus a single noisy sample for multiple examples. Basically, these all seem like interesting and important empirical questions that would be nice to explore in a bit more detail.
* Section 3.3: Claim 1 is an interesting observation. But Propositions 1 and 2 seem to just say that the only way to get a perfectly tight SMC ELBO is to perfectly sample from the joint posterior. I think there’s an easier way to make this argument:
Given an unbiased estimator \hat{Z} of Z, by Jensen’s inequality E[log \hat{Z}] ≤ log Z, with equality iff the variance of \hat{Z} = 0. The only way to get an SMC estimator’s variance to 0 is to drive the variance of the weights to 0. That only happens if you perfectly sample each particle from the true posterior, conditioned on all future information.
All of which is true as far as it goes, but I think it’s a bit of a distraction. The question is not “what’s it take to get to 0 variance” but “how quickly can we approach 0 variance”. In principle IS and SMC can achieve arbitrarily high accuracy by making K astronomically large. (Although [particle] MCMC is probably a better choice if one wants extremely low bias.)
* Section 3.2: The choice of how to get low-variance gradients through the ancestor-sampling choice seems like an important technical challenge in getting this approach to work, but there's only a very cursory discussion in the main text. I would recommend at least summarizing the main findings of Appendix A in the main text.
* A relevant missing citation: Turner and Sahani’s “Two problems with variational expectation maximisation for time-series models” (http://www.gatsby.ucl.ac.uk/~maneesh/papers/turner-sahani-2010-ildn.pdf). They discuss in detail some examples where tighter variational bounds in state-space models lead to worse parameter estimates (though in a quite different context and with a quite different analysis).
* Figure 1: What is the x-axis here? Presumably phi is not actually 1-dimensional?
Typos etc.:
* “learn a particular series intermediate” missing “of”.
* “To do so, we generate on sequence y1:T” s/on/a/, I think?
* Equation 3: Should there be a (1/K) in Z? |
iclr_2018_HJIoJWZCZ | ADVERSARIAL DROPOUT REGULARIZATION
We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by "fooling" a special domain classifier network. However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes. This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), which encourages the generator to output more discriminative features for the target domain. Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art. | Unsupervised Domain adaptation is the problem of training a classifier without labels in some target domain if we have labeled data from a (hopefully) similar dataset with labels. For example, training a classifier using simulated rendered images with labels, to work on real images.
Learning discriminative features for the target domain is a fundamental problem for unsupervised domain adaptation. The problem is challenging (and potentially ill-posed) when no labeled examples are given in the target domain. This paper proposes a new training technique called ADR, which tries to learn discriminative features for the target domain. The key idea of this technique is to move the target-domain features away from the source-domain decision boundary. ADR achieves this goal by encouraging the learned features to be robust to the dropout noise applied to the classifier.
My main concern about this paper is that the idea of "placing the target-domain features far away from the source-domain decision boundary" does not necessarily lead to *discriminative features* for the target domain. In fact, it is easy to come up with a counter-example: the target-domain features are far from the *source-domain* decision boundary, but they are all (both the positive and negative examples) on the same side of the boundary, which leads to poor target classification accuracy. The loss function (Equations 2-5) proposed in the paper does not prevent the occurrence of this counter-example.
Another concern comes from using the proposed idea in training a GAN (Section 4.3). Generating fake images that are far away from the boundary (as forced by the first term of Equation 9) is somewhat opposite to the objective of GAN training, which aims at aligning distributions of real and fake images. Although the second term of Equation 9 tries to make the generated and the real images similar, the paper does not explain how to properly balance the two terms of Equation 9. As a result, I am worried that the proposed method may lead to more mode-collapsing for GAN.
The experimental evaluation seems solid for domain adaptation. The semi-supervised GANs part seemed significantly less developed and might be weakening rather than strengthening the paper.
Overall the empirical evaluation of the proposed method is quite well done and the results are encouraging, despite the lack of theoretical foundations for this method.
iclr_2018_S1FFLWWCZ | Multi-view recognition is the task of classifying an object from multi-view image sequences. Instead of using a single-view for classification, humans generally navigate around a target object to learn its multi-view representation. Motivated by this human behavior, the next best view can be learned by combining object recognition with navigation in complex environments. Since deep reinforcement learning has proven successful in navigation tasks, we propose a novel multi-task reinforcement learning framework for joint multi-view recognition and navigation. Our method uses a hierarchical action space for multi-task reinforcement learning. The framework was evaluated with an environment created from the ModelNet40 dataset. Our results show improvements on object recognition and demonstrate human-like behavior on navigation. | The ambition of this paper is to address multi-view object recognition and the associated navigation as a unified reinforcement learning problem using a deep CNN to represent the policy.
Multi-view recognition and active viewpoint selection have been studied for more than 30 years, but this paper ignores most of this history. The discussion of related work as well as the empirical evaluation are limited to very recent methods using neural networks. I encourage the authors to look e.g. at Paletta and Pinz [1] (who solve a very similar and arguably harder problem in related ways) and at Bowyer & Dyer [2] as well as the references contained in these papers for history and context. Active vision goes back to Bajcsy, Aloimonos, and Ballard; these should be cited instead of Ammirato et al. Conversely, the related work cites a handful of papers (e.g. in the context of Atari 2600 games) that are unrelated to this work.
The navigation aspect is limited to fixed-size left or right displacements (at least for the ModelNet40 task, which is the only one to be evaluated and discussed). This is strictly weaker than active viewpoint selection. Adding this to the disregard of prior work, it is (at best) misleading to claim that this is "the first framework to combine learning of navigation and object recognition".
Calling this "multi-task" learning is also misleading. There is only one ultimate objective (object recognition), while the agent has two types of actions available (moving or terminating with a classification decision).
There are other misleading, vague, or inaccurate statements in the paper, for example:
- "With the introduction of deep learning to reinforcement learning, there has been ... advancements in understanding ... how humans navigate": I don't think such a link exists; if it does, a citation needs to be provided.
- "inductive bias like image pairs": Image pairs do not constitute inductive bias. Either the term is misused or the wording must be clarified; likewise for other occurrences of "inductive bias".
- "a single softmax layer is biased towards tasks with larger number of actions": I think I understand what this is intended to say, but a "softmax layer" cannot be "biased towards tasks" as there is only one, given, task.
- I do not understand what the stated contribution of "extrapolation of the action space to a higher dimension for multi-task learning" is meant to be.
- "Our method performs better ... than state-of-the-art in training for navigation to the object": The method does not involve "navigation to the object", at least not for the ModelNet40 dataset, the only for which results are given.
It is not clear what objective function the system is intended to optimize. Given that the stated task is object recognition, and judging from Table 2, I was expecting it to be the misclassification rate, but this is clearly not the case, as the system is not set up to minimize it. What "biases" the system towards classification actions (p. 5)? Why is it bad if the agent shows "minimal movement actions" as long as the misclassification rate is minimized? No results are given to show whether this is the case or not. The text then claims that the "hierarchical method gives superior results", but this is not shown either.
Table 3 reveals that the system fails to learn much of interest at all. Much of the time the agent chooses not to move and performs relatively poorly; taking more steps improves the results; often all 12 views are collected before a classification decision is made. Two of the most important questions remain open: (1) What would be the misclassification rate if all views are always used? (2) What would be the misclassification rate under a random baseline policy not involving navigation learning (e.g., taking a random number of steps in the same direction)?
Experiments using the THOR dataset are announced but left underspecified (e.g., the movement actions), and no results or discussion are given.
SUMMARY
Quality: lacking in may ways; see above.
Clarity: Most of the paper is clear enough, but there are confusions and missing information about THOR and problems with phrasing and terminology. Moreover, there are many grammatical and typographical glitches.
Originality: Harder tasks have been looked at before (using methods other than CNN). Solving a simpler version using CNN I do not consider original unless there is a compelling pay-off, which this paper does not provide.
Significance: Low.
Pros: The problem would be very interesting and relevant if it were formulated in a more ambitious way (e.g., a more elaborate action space than that used for ModelNet40) with a clear objective function.
Cons: See above.
[1] Lucas Paletta and Axel Pinz, Active object recognition by view integration and reinforcement learning, Robotics and Autonomous Systems 31, 71-86, 2000
[2] Bowyer, K. W. and Dyer, C. R. (1990), Aspect graphs: An introduction and survey of recent results. Int. J. Imaging Syst. Technol., 2: 315–328. doi:10.1002/ima.1850020407 |
iclr_2018_r17lFgZ0Z | Automated metrics such as BLEU are widely used in the machine translation literature. They have also been used recently in the dialogue community for evaluating dialogue response generation. However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting. Task-oriented dialogue responses are expressed on narrower domains and exhibit lower diversity. It is thus reasonable to think that these automated metrics would correlate well with human judgment in the task-oriented setting where the generation task consists of translating dialogue acts into a sentence. We conduct an empirical study to confirm whether this is the case. Our findings indicate that these automated metrics have stronger correlation with human judgments in the task-oriented setting compared to what has been observed in the non task-oriented setting. We also observe that these metrics correlate even better for datasets which provide multiple ground truth reference sentences. In addition, we show that some of the currently available corpora for task-oriented language generation can be solved with simple models and advocate for more challenging datasets. | The authors present a solid overview of unsupervised metrics for NLG, and perform a correlation analysis between these metrics and human evaluation scores on two task-oriented dialog generation datasets using three LSTM-based models. They find weak but statistically significant correlations for a subset of the evaluated metrics, an improvement over the situation that has been observed in open-domain dialog generation.
Other than the necessarily condensed model section (describing a model explained at greater length in a different work) the paper is quite clear and well-written throughout, and the authors' explication of metrics like BLEU and greedy matching is straightforward and readable. But the novel work in the paper is limited to the human evaluations collected and the correlation studies run, and the authors' efforts to analyze and extend these results fall short of what I'd like to see in a conference paper.
Some other points:
1. Where does the paper's framework for response generation (i.e., dialog act vectors and delexicalized/lexicalized slot-value pairs) fit into the landscape of task-oriented dialog agent research? Is it the dominant or state-of-the-art approach?
2. The sentence "This model is a variant of the “ld-sc-LSTM” model proposed by Sharma et al. (2017) which is based on an encoder-decoder framework" is ambiguous; what is apparently meant is that Sharma et al. (2017) introduced the hld-scLSTM, not simply the ld-scLSTM.
3. What happens to the correlation coefficients when exact reference matches (a significant component of the highly-rated upper right clusters) are removed? (A sketch of this check is given after these comments.)
4. The paper's conclusion naturally suggests the question of whether these results extend to more difficult dialog generation datasets. Can the authors explain why the datasets used here were chosen over e.g. El Asri et al. (2017) and Novikova et al. (2016)? |
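To make point 3 concrete, this is the kind of check I have in mind (the numbers below are hypothetical, purely for illustration; they are not from the paper):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical (metric score, human rating, exact-match flag) triples per response.
metric = np.array([1.00, 0.42, 0.37, 1.00, 0.55, 0.12, 1.00, 0.61])
human  = np.array([5.0,  3.5,  3.0,  5.0,  4.0,  2.0,  4.5,  3.5])
exact  = np.array([True, False, False, True, False, False, True, False])

rho_all, p_all = spearmanr(metric, human)
rho_sub, p_sub = spearmanr(metric[~exact], human[~exact])
print(f"all responses:         rho = {rho_all:.2f} (p = {p_all:.3f})")
print(f"exact matches removed: rho = {rho_sub:.2f} (p = {p_sub:.3f})")
```

Reporting both numbers, per metric and per dataset, would show how much of the observed correlation is driven by the trivially correct responses.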
iclr_2018_SyYe6k-CW | DEEP BAYESIAN BANDITS SHOWDOWN: AN EMPIRICAL COMPARISON OF BAYESIAN DEEP NETWORKS FOR THOMPSON SAMPLING
Recent advances in deep reinforcement learning have made significant strides in performance on applications such as Go and Atari games. However, developing practical methods to balance exploration and exploitation in complex domains remains largely unsolved. Thompson Sampling and its extension to reinforcement learning provide an elegant approach to exploration that only requires access to posterior samples of the model. At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical. Thus, it is attractive to consider approximate Bayesian neural networks in a Thompson Sampling framework. To understand the impact of using an approximate posterior on Thompson Sampling, we benchmark well-established and recently developed methods for approximate posterior sampling combined with Thompson Sampling over a series of contextual bandit problems. We found that many approaches that have been successful in the supervised learning setting underperformed in the sequential decision-making scenario. In particular, we highlight the challenge of adapting slowly converging uncertainty estimates to the online setting. | If two major questions below are answered affirmatively, I believe this article could be very good contribution to the field and deserve publication in ICLR.
In this article the authors provide a service to the community by comparing the currently most-used algorithms for Thompson Sampling-based contextual (parametric) bandits on a clear empirical benchmark. They reimplement the key algorithms, investing time to make up for the lack of published source code for some.
After a clear exposition of the reasons why Thompson Sampling is attractive, they concisely overview the key ideas behind 7 different families of algorithms, with a proper literature review. They highlight some of the subtleties of benchmarking bandit problems (or any active learning algorithms for that matter): the lack of counterfactuals and hence the difference in observed datasets. They explain their benchmark framework and datasets, then briefly summarise the results for each class of algorithms. Most of the actual measures from the benchmark are provided in a lengthy 12-page appendix chock-full of graphs and tables.
It is refreshing to see an article that does not boast to offer the new "bestest-ever" algorithm in town, overcrowding a landscape, but instead tries to prune the tree of possibilities and wading through other people's inflated claims. To the authors: thank you! It is too easy to dismiss these articles as "pedestrian non-innovative groundwork": if there were more like it, our field would certainly be more readable and less novelty-prone.
Of course, there is no perfect benchmark, and like every benchmark, the choices made by the authors could be debated to no end. At least, the authors try to explain them, and the tradeoffs they faced, as clearly as possible (except for two points mentioned below), which again is too rare in our field.
Major clarifications needed:
My two key questions are:
* Is the code of good quality, with exact reproducibility and good potential for extension, in a standard language (e.g. Python)? This benchmark only gets its full interest if the code is publicised and well engineered. The open-sourcing, according to footnote 1, is planned -- but this should be made clearer in the main text. There is no discussion of the engineering quality, not even of the language used, and this is quite important if the authors want the community to build upon this work. The code was not submitted for review, and as such its accessibility to new contributors is unknown to this reviewer. That could be a make or break feature of this work.
* Is the hyperparameter tuning reproducible? Hyperparameter tuning should be discussed much more clearly (in the Appendix): while I appreciate the discussion on page 8 of how they were frozen across datasets, "they were chosen through careful tuning" is way too short. What kind of tuning? Was it manual, and hence not reproducible? Or was it a clear, reproducible grid search or optimiser? I thoroughly hope for the latter; otherwise an unreproducible benchmark would be of very limited value (a sketch of what I mean is given below).
If the answers to the two questions above are "YES", then this is a brilliant article and I am ready to increase my score. However, if either is a "NO", I am afraid that would limit how much this benchmark will serve as a reference (as opposed to "just one interesting datapoint").
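To make the second question concrete, the kind of fully reproducible sweep I have in mind looks like the sketch below (generic code with made-up hyperparameter values and a placeholder experiment function, not the authors' benchmark):

```python
import itertools
import numpy as np

def run_bandit_experiment(algo, lr, prior_var, seed):
    """Placeholder: train `algo` on one contextual-bandit problem, return cumulative regret."""
    rng = np.random.default_rng(seed)
    return rng.random()          # stands in for the real benchmark run

grid = {"lr": [1e-3, 1e-2, 1e-1], "prior_var": [0.1, 1.0, 10.0]}
seeds = range(5)
results = {}
for lr, prior_var in itertools.product(grid["lr"], grid["prior_var"]):
    regrets = [run_bandit_experiment("NeuralLinear", lr, prior_var, s) for s in seeds]
    results[(lr, prior_var)] = (float(np.mean(regrets)), float(np.std(regrets)))

best = min(results, key=lambda cfg: results[cfg][0])
print("best config:", best, "mean/std regret:", results[best])
```

Anything of this form, committed together with the code and the chosen seeds, would settle the reproducibility question.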
Minor improvements:
* Please proofread some obvious typos:
- page 4 "suggesed" -> "suggested",
- page 8 runaway math environment wreaking the end of the sentence.
- reference "Meire Fortunato (2017)" should be "Fortunato et al. (2017)", throughout.
* Improve readability of figures' legends, e.g. Figure 2.(b) key is un-readable.
* A simple table mapping the name of the algorithm to the corresponding article is missing. Not everyone knows what BBB and BBBN stands for.
* A measure of wall time would be needed: while computational cost is often mentioned (especially as a drawback to getting proper performance out of variational inference), it is nowhere plotted. Of course that would partly depend on the quality of the implementation, but this is somewhat mitigated if all the algorithms have been reimplemented by the authors (is that the case? please clarify). |
iclr_2018_SyVOjfbRb | LSH-SAMPLING BREAKS THE COMPUTATIONAL CHICKEN-AND-EGG LOOP IN ADAPTIVE STOCHASTIC GRADIENT ESTIMATION
Stochastic Gradient Descent or SGD is the most popular optimization algorithm for large-scale problems. SGD estimates the gradient by uniform sampling with sample size one. There have been several other works that suggest faster epoch wise convergence by using weighted non-uniform sampling for better gradient estimates. Unfortunately, the per-iteration cost of maintaining this adaptive distribution for gradient estimation is more than calculating the full gradient. As a result, the false impression of faster convergence in iterations leads to slower convergence in time, which we call a chicken-and-egg loop. In this paper, we break this barrier by providing the first demonstration of a sampling scheme, which leads to superior gradient estimation, while keeping the sampling cost per iteration similar to that of the uniform sampling. Such an algorithm is possible due to the sampling view of Locality Sensitive Hashing (LSH), which came to light recently. As a consequence of superior and fast estimation, we reduce the running time of all existing gradient descent algorithms. We demonstrate the benefits of our proposal on both SGD and AdaGrad. | The main contribution of this work is just a combination of LSH schemes and SGD updates. Since hashing schemes essentially reduce the dimension, LSH brings computational benefits to the SGD operation. The targeted issue is fundamentally important, and the proposed approach (exploiting LSH schemes) seems to be sound. Specifically, LSH schemes fit into the SGD schemes since they hash two vectors to the same bucket with probability that increases with their similarity (here, inner product or cosine similarity).
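For readers unfamiliar with the collision property invoked here, a minimal sketch of signed random projections (SimHash) for cosine similarity, which is one standard LSH family for this setting (my own illustration, not necessarily the exact scheme used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_hashes = 50, 10_000

x = rng.standard_normal(d)
y = x + 0.5 * rng.standard_normal(d)             # a vector correlated with x
cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(np.clip(cos, -1.0, 1.0))

# One hash bit per random hyperplane; two vectors collide on a bit
# iff they fall on the same side of that hyperplane.
R = rng.standard_normal((n_hashes, d))
collision_rate = np.mean(np.sign(R @ x) == np.sign(R @ y))

print(f"empirical collision rate {collision_rate:.3f} vs 1 - theta/pi = {1 - theta / np.pi:.3f}")
```

Buckets built by concatenating several such bits therefore let one preferentially sample points similar to a query at near-constant cost, which, as I understand it, is the property the paper exploits for adaptive gradient sampling.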
Strengths: a sound approach; a simple and straightforward idea that is shown to work well in evaluations.
Weaknesses:
1. The phrase "computational chicken-and-egg loop" in the title and in the main body is misleading and not accurate. The so-called "chicken-and-egg" issue concerns a causality dilemma: for two causally related things, which comes first. In the paper, the authors are concerned with "more accurate gradients" and "faster convergence"; their causality is very clear (the first leads to the second), and there is no causality dilemma. Even from a computational perspective, "SGD schemes aim for computational efficiency" and "stochasticity makes the convergence slow down" do not form a causality dilemma. The reason is that the latter is the cost of the former; as the old saying goes, "there is no such thing as a free lunch". Therefore, this disordered logic makes the title very misleading, and all the corresponding descriptions in the main body are obscured by "twisted" and unnatural logic.
2. The technical depth is limited. Besides the good observation that LSH fits well into SGD, there are no further in-depth results provided. The theorems (Theorems 1~3) are trivial, with only a loose relation to LSH.
3. The LSH schemes are not correctly referred to. Since the similarity metric is the inner product, the authors are expected to refer to cosine-similarity and inner-product based LSHs, which were published recently in NIPS. It is too superficial to simply assume "any known LSH scheme" in Alg. 2. Accordingly, Theorems 1~3 are again unrelated to this specific kind of similarity metric (cosine similarity).
4. Because the authors tried hard to stick to the unnecessary (and somewhat boastful) phrase "computational chicken-and-egg loop", the organization and presentation of the whole manuscript are poor.
5. Occasionally, there are typos, and it is not good to use words in formulas. Please proof-read carefully. |
iclr_2018_rJ7RBNe0- | We examine how learning from unaligned data can improve both the data efficiency of supervised tasks as well as enable alignments without any supervision. For example, consider unsupervised machine translation: the input is two corpora of English and French, and the task is to translate from one language to the other but without any pairs of English and French sentences. To address this, we develop feature matching auto-encoders (FMAEs). FMAEs ensure that the marginal distribution of feature layers are preserved across forward and inverse mappings between domains. We show that FMAEs achieve state of the art for data efficiency and alignment across three tasks: text decipherment, sentiment transfer, and neural machine translation for English-to-German and English-to-French. Most compellingly, FMAEs achieve state of the art for semi-supervised neural machine translation with significant BLEU score differences of up to 5.7 and 6.3 over traditional supervised models. Furthermore, on English-to-German, FMAEs outperform last year's best models such as ByteNet (Kalchbrenner et al., 2016) while using only half as many supervised examples. | This paper proposes a generative model called matching auto-encoder to carry out the learning from unaligned data.
However, it is very disappointing to read the contents after the introduction, since most of the contributions are overclaimed.
Detailed comments:
- Figure 1 is incorrect because the pairs (x, z) and (y, z) should be put into two different plates if x and y are unaligned.
- Much of the content in Sec. 3 is confusing to me. What is the difference between g_l(x) and g_l(y) if g_l : H_{l−1} → H_l and f_l : H_{l−1} → H_l are the same? What are e_x and e_y? Why is there a λ if it is a generative model?
- If the title says 'text decipherment', there should be no parallel data at all; otherwise it is a huge overclaim on the decipherment tasks. Please add citations of Kevin Knight's recent papers on decipherment.
- Reading the experiment results of 'Sentiment Transfer' is a disaster to me. I couldn't get much information on 'sentiment transfer' from a bunch of ungrammatical unnatural language sentences. I would prefer to see some results of baseline models for comparison instead of a pure qualitative analysis.
- The claim that "FMAEs are state of the art for neural machine translation with limited supervision on EN-DE and EN-FR" is not exciting to me. Semi-supervised learning is interesting, but in the MT scenario we do have enough parallel data for many language pairs. Unless the model is able to exceed the 'real' state of the art that uses the full set of parallel data, we cannot tell whether the models actually benefit NMT. Interestingly, the authors didn't provide any results using the full parallel data set. Possibly this is because the introduced stochastic variables mainly help by preventing the models from overfitting on small datasets.
iclr_2018_H1q-TM-AW | A DIRT-T APPROACH TO UNSUPERVISED DOMAIN ADAPTATION
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) 1 model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks. | As there are many kinds of domain adaptation problems, the need to mix several learning strategies to improve the existing approaches is obvious. However, this task is not necessarily easy to succeed. The authors proposed a sound approach to learn a proper representation (in an adversarial way) and comply the cluster assumption.
The experiments show that this Virtual Adversarial Domain Adaptation network (VADA) achieves great results when compared to existing learning algorithms. Moreover, we also see the learned model is consistently improved using the proposed "Decision-boundary Iterative Refinement Training with a Teacher" (DIRT-T) approach.
The proposed methodology relies on multiple choices that could sometimes be better studied and/or explained. Namely, I would like to see empirically which role the locally-Lipschitz regularization term (Equation 7) plays. Also, I wonder why this term is tuned by a hyperparameter (lambda_s) for the source, while a single hyperparameter (lambda_t) is used for the sum of the two target-domain terms.
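For reference, my understanding of the two target-side ingredients being discussed, written as a hedged PyTorch-style sketch (a generic conditional-entropy term and a one-power-iteration VAT-style locally-Lipschitz penalty; this is not claimed to match the paper's Equations 7 or 14 exactly):

```python
import torch
import torch.nn.functional as F

def conditional_entropy(model, x):
    # Cluster-assumption penalty: low entropy of p(y|x) pushes decision
    # boundaries away from high-density regions of the (target) data.
    log_p = F.log_softmax(model(x), dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def vat_penalty(model, x, xi=1e-6, eps=1.0):
    # Locally-Lipschitz / virtual adversarial penalty:
    # KL(p(y|x) || p(y|x + r_adv)) for a small, adversarially chosen r_adv.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    d = torch.randn_like(x)
    d = xi * d / d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
    (grad,) = torch.autograd.grad(kl, d)
    grad_norm = grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))).clamp_min(1e-12)
    r_adv = eps * grad / grad_norm
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
```

An ablation reporting target accuracy with and without the VAT-style term (and with separate weights on the two target terms) would answer both questions above.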
On the theoretical side, the discussion could be improved. Namely, Section 3 about "limitation of domain adversarial training" correctly explains that "domain adversarial training may not be sufficient for domain adaptation if the feature extraction function has high-capacity". It would be interesting to explain whether this observation is consistent with Theorem 1 of the paper (due to Ben-David et al., 2010), on which several domain adversarial approaches are based. The need to consider supplementary assumptions (such as the cluster assumption) to achieve good adaptation can also be studied through the lens of Ben-David's more recent work, e.g. Ben-David and Urner (2014). In the latter, the notion of "Probabilistic Lipschitzness", which is a relaxation of the "cluster assumption", seems very related to the present work.
Reference:
Ben-David and Urner. Domain adaptation-can quantity compensate for quality?, Ann. Math. Artif. Intell., 2014
Pros:
- Propose a sound approach to mix two complementary strategies for domain adaptation.
- Great empirical results.
Cons:
- Some choices leading to the optimization problem are not sufficiently explained.
- The theoretical discussion could be improved.
Typos:
- Equation 14: In the first term (target loss), theta should have an index t (I think).
- Bottom of page 6: "... and that as our validation set" (missing word). |
iclr_2018_BkM3ibZRW | While autoencoders are a key technique in representation learning for continuous structures, such as images or wave forms, developing general-purpose autoencoders for discrete structures, such as text sequence or discretized images, has proven to be more challenging. In particular, discrete inputs make it more difficult to learn a smooth encoder that preserves the complex local relationships in the input space. In this work, we propose an adversarially regularized autoencoder (ARAE) with the goal of learning more robust discrete-space representations. ARAE jointly trains both a rich discrete-space encoder, such as an RNN, and a simpler continuous space generator function, while using generative adversarial network (GAN) training to constrain the distributions to be similar. This method yields a smoother contracted code space that maps similar inputs to nearby codes, and also an implicit latent variable GAN model for generation. Experiments on text and discretized images demonstrate that the GAN model produces clean interpolations and captures the multimodality of the original space, and that the autoencoder produces improvements in semi-supervised learning as well as state-of-the-art results in unaligned text style transfer task using only a shared continuous-space representation. | I was asked to contribute this review rather late in the process, and in order
to remain unbiased I avoided reading other reviews. I apologize if some of
these comments have already been addressed in replies to other reviewers.
This paper proposes a regularization strategy for autoencoders that is very
similar to the adversarial autoencoder of Makhzani et al. The main difference
appears to be that rather than using the classic GAN loss to shape the
aggregate posterior of an autoencoder to match a chosen, fixed distribution,
they instead employ a Wasserstein GAN loss (and associated weight magnitude
constraint, presumably enforced with projected gradient descent) on a system
where the matched distribution is instead learned via a parameterized sampler
("generator" in the GAN lingo). Gradient steps that optimize the encoder,
decoder and generator are interleaved. The authors apply an extension of this
method to topic and sentiment transfer and show moderately good latent space
interpolations between generated sentences.
The difference from the original AAE is rather small and straightforward, so the
novelty lies mainly in the choice of task, with its focus on discrete vectors and sequences.
The exposition leaves ample room for improvement. For one thing, there is the
irksome and repeated use of "discrete structure" when discrete *sequences* are
considered almost exclusively (with the exception of discretized MNIST digits).
The paper is also light on discussion of related work other than Makhzani et al
-- the wealth of literature on combining autoencoders (or autoencoder-like
structures such as ALI/BiGAN) and GANs merits at least passing mention.
The empirical work is somewhat compelling, though I am not an expert in this
task domain. The annealed importance sampling technique of Wu et al (2017) for
estimating bounds on a generator's log likelihood could be easily applied in
this setting and would give (for example, on binarized MNIST) a quantitative
measurement of the degree of overfitting, and this would have been preferable
to inventing new heuristic measures. The "Reverse PPL" metric requires more
justification, and it looks an awful lot like the long-since-discredited Parzen
window density estimation technique used in the original GAN paper.
High-level comments:
- It's not clear why the optimization is done in 3 separate steps. Aside
from the WGAN critic needing to be optimized for more steps, couldn't the
remaining components be trained jointly, with a weighted sum of terms for the
encoder?
- In section 2, "This [pre-training or co-training with maximum likelihood]
precludes there being a latent encoding of the sentence." It is not at all
clear to me why this would be the case.
- "One benefit of the ARAE framework is that it compresses the input to a
single code vector." This is true of any autoencoder.
- It would be worth explaining, in a sentence, the approach in Shen et al for
those who are not familiar with it, seeing as it is used as a baseline.
- We are told that the encoder's output is l2-normalized but the generator's
is not; instead, the output units of the generator are squashed with the tanh
activation. The motivation for this choice would be helpful. Shortly
thereafter we are told that the generator quickly learns to produce norm 1
outputs as evidence that it is matching the encoder's distribution, but this
is something that could just as easily have been built in, and is a
trivial sort of "distribution matching"
- In general, tables that report averages would do well to report error bars as
well. More broadly, some more nuanced statistical analysis of these results
would be worthwhile, especially where they concern human ratings.
- The dataset fractions chosen for the semi-supervised experiments seem
completely arbitrary. Is this protocol derived from some other source?
Putting these in a table along with the results would improve readability.
- Linear interpolation in latent space may not be the best choice here
seeing as e.g. for a Gaussian code the region near the origin has rather low
probability. Spherical interpolation as recommended by White (2016) may
improve qualitative results (a short slerp sketch is included after this list).
- For the interpolation results you say "we output the argmax", what is meant?
Is beam search performed in the case of sequences?
- Finally, a minor point: I will challenge the authors to justify their claim
that the learned generative model is "useful" (their word). Interpolating
between two sentences sampled from the prior is a neat parlour trick, but the
model as-is has little utility. Even some speculation on how this aspect
could be applied would be appreciated (admittedly, many GAN papers could use
some reflection of this sort). |
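On the interpolation point above: a minimal spherical-interpolation (slerp) helper of the kind White (2016) recommends could look like the sketch below. This is the generic formula, not code from the paper; the variable names are mine.

import numpy as np

def slerp(z0, z1, t):
    # Spherical linear interpolation between two latent vectors.
    # Falls back to plain lerp when the vectors are (nearly) colinear.
    z0 = np.asarray(z0, dtype=np.float64)
    z1 = np.asarray(z1, dtype=np.float64)
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)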
iclr_2018_SksY3deAW | We prove a multiclass boosting theory for the ResNet architectures which simultaneously creates a new technique for multiclass boosting and provides a new algorithm for ResNet-style architectures. Our proposed training algorithm, BoostResNet, is particularly suitable in non-differentiable architectures. Our method only requires the relatively inexpensive sequential training of T "shallow ResNets". We prove that the training error decays exponentially with the depth T if the weak module classifiers that we train perform slightly better than some weak baseline. In other words, we propose a weak learning condition and prove a boosting theory for ResNet under the weak learning condition. A generalization error bound based on margin theory is proved and suggests that ResNet could be resistant to overfitting using a network with l 1 norm bounded weights. | Summary:
This paper considers a learning method for the ResNet architecture using the boosting framework. More precisely, the authors view the structure of a ResNet as a (weighted) sum of base networks (weak hypotheses) and apply the boosting framework. The merit of this approach is that it decomposes the learning of a complex network into the learning of small-to-large subnetworks in a gradual way, and it incurs a lower computational cost. The experimental results are good. The authors also show training and generalization error bounds for the proposed approach.
Comments:
The idea of the paper is natural and interesting. Experimental results are somewhat impressive. However, I am afraid that the theoretical results in the paper contain several mistakes and do not hold. The details are below.
I think the proof of Theorem 4.2 is wrong. More precisely, there are several possibly wrong arguments as follows:
- In the proof, \alpha_{t+1} is chosen so as to minimize an upper bound of Z_t, while in the actual algorithm it is chosen to minimize Z_t. The minimizer of Z_t and that of an upper bound are different in general. So, the obtained upper bound does not hold for the training error of the actual algorithm.
- It is not a mistake, but there is no explanation of why the equality between (27) and (28) holds. Please add an explanation. Indeed, equation (21) matters.
Also, the statement of Theorem 4.2 looks somewhat like cheating: the statement seems to say that it holds for any iteration T and that the training error decays exponentially w.r.t. T. However, the parameter T is determined by the parameter gamma, so it is some particular iteration, which might be small, and the bound could then be large.
The generalization error bound, Corollary 4.3, seems to be wrong, too. More precisely, Lemma 2 of Cortes et al. is OK, but the application of Lemma 2 is not. In particular, the proof does not take into account the function \sigma. In other words, the proof considers the Rademacher complexity R_m(\calF_t) of the class \calF_t, but, actually, I think it should consider R_m(\sigma(\calF_t)), where the class \sigma(\calF_t) consists of the compositions of the function \sigma with functions f_t in \calF_t. Talagrand’s lemma (see, e.g., Mohri et al.’s book: Foundations of Machine Learning) can be used to analyze the complexity of the composite class. But the resulting bound would depend on the Lipschitzness of \sigma in an exponential way.
The explanation of the generalization ability is not sufficient. While the later weak hypotheses are complex enough and would have large edges, the complexity of the function class of weak hypotheses grows exponentially w.r.t. the iteration T, which should be mentioned.
As a summary, the paper contains nice ideas and experimental results are promising, but has non-negligible mistakes in theoretical parts which degrade the contribution of the paper.
Minor Comments:
- In Algorithm 1, \gamma_t is not defined when the while-loop starts. So, the condition of the while-loop cannot be checked.
iclr_2018_rJGZq6g0- | Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization. | --------------
Summary and Evaluation:
--------------
The paper presents a nice set of experiments on language emergence in a mutli-modal, multi-step setting. The multi-modal reference game provides an interesting setting for communication, with agents learning to map descriptions to images. The receiving agent's direct control over dialog length is also novel and allows for the interesting analysis presented in later sections.
Overall I think this is an interesting and well-designed work; however, some details are missing that I think would make for a stronger submission (see weaknesses).
--------------
Strengths:
--------------
- Generally well-written with the Results and Analysis section appearing especially thought-out and nicely presented.
- The proposed reference game provides a number of novel contributions -- giving the agents control over dialog length, providing both agents with the same vocabulary without constraints on how each uses it (implicit through pretraining or explicit in the structure/loss), and introducing an asymmetric multi-modal context for the dialog.
- The analysis is extensive and well-grounded in the three key hypotheses presented at the beginning of Section 6.
--------------
Weaknesses:
--------------
- There is room to improve the clarity of Sections 3 and 4 and I encourage the authors to revisit these sections. Some specific suggestions that might help:
- numbering all display style equations
- when describing the recurrent receiver, explain the case where it terminates (s^t=1) first such that P(o_r=1) is defined prior to being used in the message generation equation.
- I did not see an argument in support of the accuracy@K metric. Why is putting the ground truth in the top 10% the appropriate metric in this setting? Is it to enable comparison between the in-domain, out-domain, and transfer settings?
- Unless I missed something, the transfer test set results only come up once, in the context of attention methods, and are not mentioned elsewhere. Why is this? It seems appropriate to include them in Figure 5 if nowhere else in the analysis.
- Do the authors have a sense for how sensitive these results are to different runs of the training process?
- I did not understand this line from Section 5.1: "and discarding any image with a category beyond the 398-th most frequent one, as classified by a pretrained ImageNet classifier'"
- It is not specified (or I missed it) whether the F1 scores from the separate classifier are from training or test set evaluations.
- I would have liked to see analysis on the training process such as a plot of reward (or baseline adjusted reward) over training iterations.
- I encourage the authors to see the EMNLP 2017 paper "Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog", which also performs multi-round dialogs between two agents. Like this work, its authors also proposed removing memory from one of the agents as a means to avoid learning degenerate 'non-dialog' protocols.
- Very minor point: the use of fixed-length, non-sequence style utterances is somewhat disappointing given the other steps made in the paper to make the reference game more 'human like' such as early termination, shared vocabularies, and unconstrained utterance types. I understand however that this is left as future work.
--------------
Curiosities:
--------------
- I think the analysis in Figure 3 b,c is interesting and wonder if something similar can be computed over all examples. One option would be to plot accuracy@k for different utterance indexes -- essentially forcing the model to make a prediction after each round of dialog (or simply repeating its prediction if the model has chosen to stop).
iclr_2018_BJvWjcgAZ | We propose Episodic Backward Update -a new algorithm to boost the performance of a deep reinforcement learning agent by fast reward propagation. In contrast to the conventional use of replay memory with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state into its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate effectively throughout the sampled episode. We evaluate our algorithm on 2D MNIST Maze Environment and 49 games of the Atari 2600 Environment and show that our agent improves sample efficiency with a competitive computational cost. | This paper proposes a new variant of DQN where the DQN targets are computed on a full episode by a « backward » update (i.e. from end to start of episode). The targets’ update rule is similar to a regular tabular Q-learning update with high learning rate beta: this allows faster propagation of rewards obtained at the end of the episode (while beta=0 corresponds to regular DQN with no such reward propagation). This mechanism is shown to improve on Q-learning in a toy 2D maze environment (with MNIST-based pixel states providing cell coordinates) with beta=1, and on DQN and its optimality tightening variant on Atari games with beta=0.5.
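To make the backward target computation described above more concrete, here is a rough sketch of the idea as I understand it from the text (my own paraphrase and indexing, not the authors' exact algorithm; beta = 0 reduces to the usual one-step DQN target):

import numpy as np

def episodic_backward_targets(rewards, next_q, actions, gamma=0.99, beta=0.5):
    # rewards[k], actions[k]: reward and action at step k of a sampled episode
    # next_q[k]: target-network Q-values of the next state s_{k+1}, shape (T, n_actions)
    # A temporary copy of these Q-values is updated backward through the episode
    # with "learning rate" beta, so late rewards propagate to earlier targets.
    T = len(rewards)
    q_tilde = np.array(next_q, dtype=np.float64, copy=True)
    y = np.zeros(T)
    y[T - 1] = rewards[T - 1]                     # terminal transition
    for k in range(T - 2, -1, -1):
        a_next = actions[k + 1]
        q_tilde[k, a_next] = beta * y[k + 1] + (1 - beta) * q_tilde[k, a_next]
        y[k] = rewards[k] + gamma * q_tilde[k].max()
    return y                                      # regression targets for Q(s_k, a_k)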
The intuition behind the algorithm (that one should try to speed up the propagation of rewards across multiple steps) is not new; in fact, it has inspired other approaches like n-step Q-learning, eligibility traces, or, more recently, Retrace(lambda) in deep RL. Actually, the idea of replaying experiences in backward order can be traced back to the origins of experience replay (« Programming Robots Using Reinforcement Learning and Teaching », Lin, 1991), something that is not mentioned here. That being said, to the best of my knowledge the specific algorithm proposed in this submission (Alg. 2) is novel, even if Alg. 1 is not (Alg. 1 can be seen as a specific instance of Lin’s algorithm with a very high learning rate, and clearly only makes sense in toy deterministic environments).
In the absence of any theoretical analysis of the proposed approach, I would have expected an in-depth empirical validation. Unfortunately this is not the case here. In the toy environment (4.1) I am surprised by the really poor quality of the results (paths 5-10 times longer than the shortest path on average): have algorithms been run for a long enough time? Or maybe the average is a bad performance measure due to outliers? I would have also appreciated a comparison to Retrace(lambda), which is a more principled way to use multi-step rewards than n-step Q-learning (which is technically an on-policy method). Similar remarks can be made on the Atari experiments (4.2), where 10M frames is really low (the original DQN paper had results on 50M frames, and Rainbow reports 200M frames in only ~2x the training time reported here). The comparison also should have included prioritized experience replay, which has been shown to provide a significant boost in DQN, but may be tricky to combine with the proposed algorithm. Overall comparing only to vanilla DQN and its optimality tightening variant is too limited when there have been so many other meaningful improvements over DQN. This makes it really hard to tell whether the proposed algorithm would actually help when combined with a state-of-the-art method like Rainbow for instance.
A few additional small remarks and questions:
- « Second, there is no point in updating a one-step transition unless the future transitions have not been updated yet. »: should « unless » be replaced by « if »?
- In 4.1 is there a maximum number of steps per episode and can you please confirm that training is done independently for each maze?
- Typo in eq. 3: the - in the max should be a comma
- There is a fair number of typos and grammatical errors, though they do not harm the readability of the paper
- Citations for « Deep Reinforcement Learning with Double Q-learning » and « Dueling Network Architectures for Deep Reinforcement Learning » could refer to their conference versions
- « epsilon starts from 1 and is annealed to 0 at 200,000 steps in a quadratic manner »: please specify the exact formula
- Fig. 7 is really confusing: there seem to be typos, and it is not clear why the beta updates appear in these specific cells. Please revise it if you want to keep it
iclr_2018_Bk7wvW-C- | Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. As a result, we build an encoderdecoder architecture with an RNN encoder and a CNN decoder, and we show that neither an autoregressive decoder nor an RNN decoder is required. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabeled corpora, and in both cases transferability is evaluated on a set of downstream language understanding tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks. | -- updates to review: --
Thanks for trying to respond to my comments. I find the new results very interesting and fill in some empirical gaps that I think were worth investigating. I'm now more confident that this paper is worth publishing and I increased my rating from 6 to 7.
I admit that this is a pretty NLP-specific paper, but to the extent that ICLR has core NLP papers (I think it does have some), I think the paper is a reasonable fit for ICLR. It might feel more at home at a *ACL conference though.
-- original review is below: --
This paper is about modifications to the skip-thought framework for learning sentence embeddings. The results show performance comparable to or better than skip-thought while decreasing training time.
I think the overall approach makes sense: use an RNN encoder because we know it works well, but improve training efficiency by changing the decoder to a combination of feed-forward and convolutional layers.
I think it may be the case that this works well because the decoder is not auto-regressive but merely predicts each word independently. This is possible because the decoder will not be used after training. So all the words can be predicted all at once with a fixed maximum sentence length. In typical encoder-decoder applications, the decoder is used at test time to get predictions, so it is natural to make it auto-regressive. But in this case, the decoder is thrown away after training, so it makes more sense to make the decoder non-auto-regressive. I think this point should be made in the paper.
Also, I think it's worth noting that an RNN decoder could be used in a non-auto-regressive architecture as well. That is, the sentence encoding could be mapped to a sequence of length 30 as is done with the CNN decoder currently; then a (multi-layer) BiLSTM could be run over that sequence, and then a softmax classifier can be attached to each hidden vector to predict the word at that position. It would be interesting to compare that BiLSTM decoder with the proposed CNN decoder and also to compare it to a skip-thought-style auto-regressive RNN decoder. This would let us understand whether the benefit is coming more from the non-auto-regressive nature of the decoder or from the CNN vs RNN differences.
That is, it would make sense to factor the decision of decoder design along multiple axes. One axis could be auto-regressive vs predict-all-words. Another axis could be using a CNN over the sequence of word positions or an RNN over the sequence of word positions. For auto-regressive models, another axis could be train using previous ground-truth word vs train using previous predicted word. Skip-thought corresponds to an auto-regressive RNN (using the previous ground-truth word IIRC). The proposed decoder is a predict-all-words CNN. It would be natural to also experiment with an auto-regressive CNN and a predict-all-words RNN (like what I described in the paragraph above). The paper is choosing a single point in the space and referring to it as a "CNN decoder" whereas there are many possible architectures that can be described this way and I think it would strengthen the paper to increase the precision in discussing the architecture and possible alternatives.
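A minimal sketch of the predict-all-words BiLSTM decoder suggested above, written as PyTorch-style code (this is my illustration of the suggestion, not a module from the paper; dimensions and names are placeholders). Training would simply sum per-position cross-entropy losses, since every word is predicted independently.

import torch
import torch.nn as nn

class PredictAllWordsBiLSTMDecoder(nn.Module):
    # Non-auto-regressive decoder: the sentence code is expanded to a
    # fixed-length sequence, a BiLSTM runs over the positions, and each
    # word is predicted independently with a per-position classifier.
    def __init__(self, code_dim, hidden_dim, vocab_size, max_len=30):
        super().__init__()
        self.max_len = max_len
        self.expand = nn.Linear(code_dim, max_len * hidden_dim)
        self.bilstm = nn.LSTM(hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, code):                      # code: (batch, code_dim)
        batch = code.size(0)
        seq = self.expand(code).view(batch, self.max_len, -1)
        hidden, _ = self.bilstm(seq)
        return self.proj(hidden)                  # (batch, max_len, vocab_size)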
Overall, I think the architectural choices and results are strong enough to merit publication. Adding any of the above empirical comparisons would further strengthen the paper.
However, I did have quibbles with some of the exposition and some of the claims made throughout the paper. They are detailed below:
Sec. 2:
In the "Decoder" paragraph: please add more details about how the words are predicted. Are there final softmax layers that provide distributions over output words? I couldn't find this detail in the paper. What loss is minimized during training? Is it the sum of log losses over all words being predicted?
Sec. 3:
Section 3 does not add much to the paper. The motivations there are mostly suggestive rather than evidence-based. Section 3 could be condensed by about 80% or so without losing much information. Overall, the paper has more than 10 pages of content, and the use of 2 extra pages beyond the desired submission length of 8 should be better justified. I would recommend adding a few more details to Section 2 and removing most of Section 3. I'll mention below some problematic passages in Section 3 that should be removed.
Sec. 3.2:
"...this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process." What is the justification or evidence for this claim? I think the claim should be supported by an argument or some evidence or else should be removed. If the authors intend the subsequent paragraphs to justify the claim, then see my next comments.
Sec. 3.2:
"The existence of the ground-truth current word embedding potentially decreases the tendency for the decoder to exploit other information from the sentence representation."
But this is not necessarily an inherent limitation of RNN decoders since it could be addressed by using the embedding of the previously-predicted word rather than the ground-truth word. This is a standard technique in sequence-to-sequence learning; cf. scheduled sampling (Bengio et al., 2015).
Sec. 3.2:
"Although the word order information is implicitly encoded in the CNN decoder, it is not emphasized as it is explicitly in the RNN decoder. The CNN decoder cares about the quality of generated sequences globally instead of the quality of the next generated word. Relaxing the emphasis on the next word, may help the CNN decoder model to explore the contribution of context in a larger space."
Again, I don't see any evidence or justification for these arguments. Also see my discussion above about decoder variations; these are not properties of RNNs vs CNNs but rather properties of auto-regressive vs predict-all-words decoders.
Sec. 5.2-5.3:
There are a few high-level decisions being tuned on the test sets for some of the tasks, e.g., the length of target sequences in Section 5.2 and the number of layers and channel size in Section 5.3.
Sec. 5.4:
When trying to explain why an RNN encoder works better than a CNN encoder, the paper includes the following: "We stated above that, in our belief, explicit usage of the word order information will augment the transferability of the encoder, and constrain the search space of the parameters in the encoder. The results match our belief."
I don't think these beliefs are concrete enough to be upheld or contradicted. Both encoders explicitly use word order information. Can you provide some formal or theoretical statement about how the two encoders treat word order differently? I fear that it's only impressions and suppositions that lead to this difference, rather than necessarily something formal.
Sec. 5.4:
In Table 1, it is unclear why the "future predictor" model is the one selected to be reported from Gan et al (2017). Gan et al has many settings and the "future predictor" setting is the worst. An explanation is needed for this choice.
Sec. 6.1:
In the "BYTE m-LSTM" paragraph:
"Our large RNN-CNN model trained on Amazon Book Review (the largest subset of Amazon Review) performs on par with BYTE m-LSTM model, and ours works better than theirs on semantic relatedness and entailment tasks." I'm not sure this "on par" assessment is warranted by the results in Table 2. BYTE m-LSTM is better on MR by 1.6 points and better on CR by 4.7 points. The authors' method is better on SUBJ by 0.7 and better on MPQA by 0.5. So on sentiment tasks, BYTE m-LSTM is clearly better, and on the other tasks the RNN-CNN is typically better, especially on SICK-r.
More minor things are below:
Sec. 1:
The paper contains this: "The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al. (2013b)"
I don't think this is accurate. When restricting attention to neural network methods, it would be more correct to give credit to Collobert et al. (2011). But moving beyond neural methods, there were decades of previous work in using context information (counts of context words) to produce vector representations of words.
typo: "which d reduces" --> "which reduces"
Sec. 2:
The notation in the text doesn't match that in Figure 1: w_i^1 vs. w_1 and h_i^1 vs h_1.
Instead of writing "non-parametric composition function", describe it as "parameter-free". "Non-parametric" means that the number of parameters grows with the data, not that there are no parameters.
In the "Representation" paragraph: how do you compute a max over vectors? Is it a separate max for each dimension? This is not clear from the notation used.
Sec. 3.1:
inappropriate word choice: the use of "great" in "a great and efficient encoding model"
Sec. 3.2:
inappropriate word choice: the use of "unveiled" in "is still to be unveiled"
Sec. 3.4:
Tying input and output embeddings can be justified with a single sentence and the relevant citations (which are present here). There is no need for speculation about what may be going on, e.g.: "the model learns to explore the non-linear compositionality of the input words and the uncertain contribution of the target words in the same space".
Sec. 4:
I think STS14 should be defined and cited where the other tasks are described.
Sec. 5.3:
typo in Figure 2 caption: "and and"
Sec. 6.1:
In the "Skip-thought" paragraph:
inappropriate word choice: "kindly"
The description that says "we cut off a branch for decoding" is not clear to me. What is a "branch for decoding" in this context? Please modify it to make it more clear.
References:
Bengio S, Vinyals O, Jaitly N, Shazeer N. Scheduled sampling for sequence prediction with recurrent neural networks. NIPS 2015.
Collobert R, Weston J, Bottou L, Karlen M, Kavukcuoglu K, Kuksa P. Natural language processing (almost) from scratch. Journal of Machine Learning Research 2011. |
iclr_2018_B13njo1R- | PROGRESSIVE REINFORCEMENT LEARNING WITH DISTILLATION FOR MULTI-SKILLED MOTION CONTROL
Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against three alternative baselines. | Hi,
This was a nice read. I think overall it is a good idea. But I find the paper lacking a lot of details and to some extent confusing.
Here are a few comments that I have:
Figure 2 is very confusing for me. Please, first of all, make the figures much larger. ICLR does not have a strict page limit, and the figures you have are hard to impossible to read. So you train in (a) on the steps task until 350k steps? Are (b), (d), (c) in a sequence, or is testing moving from plain to different things? The plot does not explicitly account for the distillation phase, or at least not in an intuitive way. But if the goal is transfer, then actually PLAID is slower than the MultiTasker because it has an additional cost to pay (in frames and time) for the distillation phase, right? Or is this counted?
Going then to Figure 3, I almost feel that the MultiTasker might be used to simulate two separate baselines. Indeed, because the retention of tasks is done by distilling all of them jointly, one baseline is to keep finetuning a model through the 5 stages, and then at the end, after collecting the 5 policies, you can do a single consolidation step that compresses them all. So it will be quite important to know if the frequent integration steps of PLAID are helpful (does knowing 1, 2 and 3 help you learn 4 better, or is knowing 3 enough?).
Where exactly is input injection used? Is it in the experiments from Figure 3? What input is injected? What do you do when you go back to the task that doesn't have the input, feed 0? What happens if 0 has semantics?
Please say in the main text that details in terms of architecture and so on are given in the appendix. And do try to copy a bit more of them in the main text where reasonable.
What is the role of PLAID? Is it to learn a continual learning solution? So if I have 100 tasks, do I need to do 100-way distillation at the end to consolidate all skills? Will this be feasible? Wouldn't the fact of having data from all the 100 tasks at the end contradict the traditional formulation of continual learning?
Or is it to obtain a multitask solution while maximizing transfer (where you always have access to all tasks, but you choose to sequentialize them to improve transfer)? And even then, maximize transfer with respect to what? Frames required from the environment? If so, are you reusing the frames you used during training to distill? Can we afford to keep all of those frames around? If not, we have to count the distillation frames as well. Also, more baselines are needed. A simple baseline is just finetuning when going from one task to another, and just at the end distilling all the policies found throughout the way. Or at least have a good argument for why this is suboptimal compared to PLAID.
I think the idea of the paper is interesting and I'm willing to increase (or indeed decrease) my score. But I want to make sure the authors put a bit more effort into cleaning up the paper, making it clearer and easier to read, and providing at least one more baseline (if not more, considering the other things cited by them).
iclr_2018_BkUp6GZRW | BOOSTING THE ACTOR WITH DUAL CRITIC
This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic or Dual-AC. It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function, which is named as dual critic. Compared to its actor-critic relatives, Dual-AC has the desired property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way for learning the critic that is directly related to the objective function of the actor. We then provide a concrete algorithm that can effectively solve the minimax optimization problem, using techniques of multistep bootstrapping, path regularization, and stochastic dual ascent algorithm. We demonstrate that the proposed algorithm achieves state-of-the-art performance across several benchmarks. | This paper proposes a method, Dual-AC, for optimizing the actor (policy) and critic (value function) simultaneously, which takes the form of a zero-sum game, resulting in a principled method for using the critic to optimize the actor. In order to achieve that, they take the linear programming approach of solving the Bellman optimality equations, outline the deficiencies of this approach, and propose solutions to mitigate those problems. The discussion on the deficiencies of the naive LP approach is mostly well done. Their main contribution is extending the single step LP formulation to a multi-step dual form that reduces the bias and makes the connection between policy and value function optimization much clearer without losing convexity by applying a regularization. They perform an empirical study in the Inverted Double Pendulum domain to conclude that their extended algorithm outperforms the naive linear programming approach without the improvements. Lastly, there are empirical experiments done to conclude the superior performance of Dual-AC in contrast to other actor-critic algorithms.
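For readers less familiar with this construction, the one-step linear-programming view of the Bellman optimality equation that the paper starts from is, in generic textbook notation (not copied from the paper):

\begin{aligned}
\min_{V} \quad & \sum_{s} \mu(s)\, V(s) \\
\text{s.t.} \quad & V(s) \;\ge\; r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V(s') \qquad \forall\, (s,a).
\end{aligned}

Its Lagrangian introduces nonnegative multipliers \rho(s,a), interpretable (after normalization) as a discounted state-action occupancy measure, which is what turns the joint optimization of the value function and the policy into the max-min (two-player) problem discussed above.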
Overall, this paper could be a significant algorithmic contribution, with the caveat that some clarifications on the theory and experiments are needed. Given these clarifications in an author response, I would be willing to increase the score.
For the theory, there are a few steps that need clarification, and the novelty also needs to be clarified. On novelty, it is unclear if Theorem 2 and Theorem 3 are both being stated as novel results. It looks like Theorem 2 has already been shown in "Randomized Linear Programming Solves the Discounted Markov Decision Problem in Nearly-Linear Running Time”. There is a statement that “Chen & Wang (2016); Wang (2017) apply stochastic first-order algorithms (Nemirovski et al., 2009) for the one-step Lagrangian of the LP problem in reinforcement learning setting. However, as we discussed in Section 3, their algorithm is restricted to tabular parametrization”. Is your Theorem 2 somehow an extension? Is Theorem 3 completely new?
This is particularly called into question due to the lack of assumptions about the function class for value functions. It seems like the value function is required to be able to represent the true value function, which can be almost as restrictive as requiring tabular parameterizations (which can represent the true value function). This assumption seems to be used right at the bottom of Page 17, where U^{pi*} = V^*. Further, eta_v must be chosen to ensure that it does not affect (constrain) the optimal solution, which implies it might need to be very small. More about conditions on eta_v would be illuminating.
There is also one step in the theorem that I cannot verify. On Page 18, how is the square removed from the difference between U and U^pi? The transition from the second line of the proof to the third line is not clear. It would also be good to state more clearly on page 14 how you get the first inequality, for || V^* ||_{2,mu}^2.
For the experiments, the following should be addressed.
1. It would have been better to also show the performance graphs with and without the improvements for multiple domains.
2. The central contribution is extending the single step LP to a multi-step formulation. It would be beneficial to empirically demonstrate how increasing k (the multi-step parameter) affects the performance gains.
3. Increasing k also comes at a computational cost. I would like to see some discussions on this and how long dual-AC takes to converge in comparison to the other algorithms tested (PPO and TRPO).
4. The authors concluded that local convexity is present, based on Hessian inspection, due to the use of path regularization. It was also mentioned that increasing the regularization parameter size increases the convergence rate. Empirically, how does changing the regularization parameter affect the performance in terms of reward maximization? In the experimental section of the appendix, it is mentioned that multiple regularization settings were tried, but their performance is not reported. Also, for the regularization parameters that were tried, based on Hessian inspection, did they all result in local convexity? A bit more discussion on these choices would be helpful.
Minor comments:
1. Page 2: In equation 5, there should not be a 'ds' in the dual variable constraint |
iclr_2018_rkeZRGbRW | We study how, in generative adversarial networks, variance in the discriminator's output affects the generator's ability to learn the data distribution. In particular, we contrast the results from various well-known techniques for training GANs when the discriminator is near-optimal and updated multiple times per update to the generator. As an alternative, we propose an additional method to train GANs by explicitly modeling the discriminator's output as a bi-modal Gaussian distribution over the real/fake indicator variables. In order to do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through metaadversarial training. We observe that our new method, when trained together with a strong discriminator, provides meaningful, non-vanishing gradients. | The authors provided empirical analysis of different variants of GANs and proposed a regularization scheme to combat the vanishing gradient when the discriminator is well trained.
More specifically, the authors demonstrated the importance of intra-class variance in the discriminator’s output. Methods whose discriminators tend to map inputs of a class to single real values, such as the standard GAN and Least Squares GAN, are unable to provide a reliable learning signal for the generator. Variance in the discriminator’s output is essential to allow the generator to learn in the presence of a well-trained discriminator. To ensure the discriminator’s output follows the mixture of two univariate Gaussians, the authors proposed to add two additional discriminators which are trained in a similar way as in the original GAN formulation. The technique is related to Linear Discriminant Analysis. From a broader perspective, the new meta-adversarial learning can be applied to ensure various desirable properties in GANs.
The performance of the variance regularization scheme was evaluated on the CIFAR-10 and CelebA data.
Summary:
——
I think the paper discusses a very interesting topic and presents an interesting direction for training the GANs. A few points are missing which would provide significantly more value to readers. See comments below for details and other points.
Comments:
——
1. Why would a bi-modal distribution be meaningful? Deep nets implicitly transform the data which is probably much more effective than using complex bi-modal Gaussian distribution; the bi-modal concept can likely be captured using classical techniques.
2. On page 4, in Eq. (8) and (9), it remains unclear what $\mathcal{R}$ and $\mathcal{F}$ really are beyond two-layer MLPs; are the results of those two-layer MLPs used as the mean of a Gaussian distribution, i.e., $\mu_r$ and $\mu_f$?
3. Regarding the description above Eq. (12), what is really used in practice, i.e., in the experiments? The paper omits many details that seem important for understanding. Could the authors provide more details on choosing the generator loss function and why Eq. (12) provides satisfying results in practice?
Minor Comments:
——
1. In Sec 2.1, the sentence needs to be corrected: “As shown in Arjovsky & Bottou (2017), the JS divergence will be flat everywhere important if P and Q both lie on low-dimensional manifolds (as is likely the case with real data) and do not prefectly align.”
2. Last sentence in Conclusion: “which can be applied to ensure enforce various desirable properties in GANs.” Please remove either “ensure” or “enforce.” |
iclr_2018_Sy4c-3xRW | We propose DropMax, a stochastic version of softmax classifier which at each iteration drops non-target classes with some probability, for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are learned based on the input via regularized variational inference. This stochastic regularization has an effect of building an ensemble classifier out of exponential number of classifiers with different decision boundaries. Moreover, the learning of dropout probabilities for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains improved accuracy over regular softmax classifier and other baselines. Further analysis of the learned dropout masks shows that our model indeed selects confusing classes more often when it performs classification. | Pros
- The proposed model is a nice way of multiplicatively combining two features:
one which determines which classes to pay attention to, and another that
provides useful features for discrimination.
- The adaptive component seems to provide improvements for small dataset sizes
and large number of classes.
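To illustrate the multiplicative combination described in the first point (and
the deterministic variant suggested under Cons below), a gated softmax can be
sketched roughly as follows. This is only a caricature of the idea with my own
naming, not the paper's stochastic DropMax formulation.

import torch

def gated_softmax(logits, gate_logits):
    # Deterministic multiplicative gating: a sigmoid "attention" over classes
    # rescales the (exponentiated) class scores before normalization.
    gates = torch.sigmoid(gate_logits)
    shifted = logits - logits.max(dim=1, keepdim=True).values   # numerical stability
    scores = gates * torch.exp(shifted)
    return scores / (scores.sum(dim=1, keepdim=True) + 1e-12)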
Cons
- "One can easily see that if o_t(x; w) = 0, then class t becomes neutral in the
classification and the gradients are not back-propagated from it." : This does
not seem to be true. Even if the logits are zero, the class would have a
non-zero probability and would receive gradients. Do the authors mean
exp(o_t(x;w)) = 0 ?
- Related to the above, it should be clarified what is meant by dropping a
class. Is its logit set to zero or -\infty ? Excluding a class from the
softmax is equivalent to having a logit of -\infty, not zero. However, from the
equations in the paper it seems that the logit is set to zero. This would not
result in excluding the unit. The overall effect would just be to raise the
magnitude of logits across the entire softmax (a small numerical illustration
of the zero vs. -\infty point is given after this list).
- It seems that the model benefits from at least two separate effects - one is
the attention mechanism provided by the sigmoids, and the other is the
stochasticity during training. Presently, it is not clear if only one of the
components is providing most of the benefits, or if both things are useful. It
would be great to compare this model to a non-stochastic one which just has the
multiplicative effects applied in a deterministic way (during both training and
testing).
- The objective of the attention mechanism that sets the dropout mask seems to
be the same as the primary objective of classifying the input, and the
attention mechanism is prevented from solving the task by adding an extra
entropy regularization. It would be useful to explain more why this is needed.
Would it not be fine if the attention mechanism did a perfect job of selecting
the class ?
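Following up on the "dropping a class" bullet above, a quick numerical check of
the zero vs. -\infty logit point (plain numpy, my own illustration):

import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=np.float64)
    e = np.exp(z - z.max())
    return e / e.sum()

print(softmax([2.0, 1.0]))            # class 3 truly excluded from the softmax
print(softmax([2.0, 1.0, -np.inf]))   # a -inf logit reproduces that exclusion
print(softmax([2.0, 1.0, 0.0]))       # a zero logit still gives class 3 probability mass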
Quality
The paper makes relevant comparisons and is overall well-motivated. However,
some aspects of the paper can be improved by adding more explanations.
Clarity
Some crucial aspects of the paper are unclear as mentioned above.
Originality
The main contribution of the paper is similar to multiplicative gating. The
added stochasticity and the model ensembling interpretation are probably novel.
However, experiments are insufficient to determine whether it is this novelty
that contributes to improved performance or just the gating.
Significance
This paper makes incremental improvements and would be of moderate interest to
the machine learning community.
Typos :
- In Eq 3, the numerator has z_t. Should that be z_y ?
- In Eq 5, the denominator has z_y. Should that be z_t ? |
iclr_2018_r1wEFyWCW | FEW-SHOT AUTOREGRESSIVE DENSITY ESTIMATION: TOWARDS LEARNING TO LEARN DISTRIBUTIONS
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learn across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the-art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset. | This paper focuses on density estimation when the amount of data available for training is low. The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learning to adapt a basic model to a few new samples. The paper presents two independent methods.
The first method is effectively a PixelCNN combined with an attention module. Specifically, the support set is convolved to generate two sets of feature maps, the so-called "key" and "value" feature maps. The key feature map is used by the model to compute the attention on particular regions of the support images when generating the pixels of the new "target" image. The value feature maps are used to compute the local encoding, which is used to generate the respective pixels of the new target image, taking into account also the attention values. The second method is simpler, and very similar to fine-tuning the base network on the few new samples provided during training. Despite some interesting elements, the paper has problems.
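(For concreteness, the kind of key/value attention over support-set features just described can be sketched generically as follows; the naming and shapes are my own, not the paper's exact architecture.)

import torch
import torch.nn.functional as F

def attend_to_support(query, support_keys, support_values):
    # query:          (batch, d)       encoding of the pixel being generated
    # support_keys:   (batch, n, d)    "key" features of support-image regions
    # support_values: (batch, n, d_v)  "value" features of the same regions
    scores = torch.einsum('bd,bnd->bn', query, support_keys)
    weights = F.softmax(scores, dim=1)               # where to look in the support set
    context = torch.einsum('bn,bnd->bd', weights, support_values)
    return context                                   # conditioning for the next pixel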
First, the novelty is rather limited. The first method seems to be slightly more novel, although it is unclear whether the contribution of combining the different models is significant. The second method is too similar to fine-tuning: although the authors claim that \mathcal{L}_inner can be any function that minimizes the total loss \mathcal{L}, in the end it is clear that the log-likelihood is used. How is this approach (much) different from standard fine-tuning, since the quantity P(x; \theta') is anyway unknown and cannot be "trained" to be maximized?
Besides the limited novelty, the submission leaves several parts unclear. First, why are the convolutional features of the support set in the first method divided into "key" and "value" feature maps as in p_key=p[:, 0:P], p_value=p[:, P:2*P]? Is this division arbitrary, or is there a more basic reason? Also, is there any difference between key and value? Why not use the same feature map for computing the attention and computing eq (7)?
Also, in the first model it is suggested that an additional feature can be a 1-of-K channel for the supporting image label: the reason is that you might have multiple views of objects, and knowing which view contributes to the attention can help in learning the density. However, this assumes that the views are ordered, namely that the recording stage has a very particular format. Isn't this a bit unrealistic, given the proposed setup anyway?
Regarding the second method, it is not clear why leaving this room for flexibility (by allowing L_inner to be any function) to the model is a good idea. Isn't this effectively opening the door to massive overfitting? Besides, isn't the statement that \mathcal{L}_inner can be any function void? At the end of the day, one can also claim the same for gradient descent: you don't need to have the true gradients of the true loss, as long as the objective function obtains gradually lower and lower values.
Last, it is unclear what the connection between the first and the second model is. Are these two independent models that solve the same problem? Or are they connected?
Regarding the evaluation of the models, the nature of the task makes the evaluation hard: for real data like images one cannot know the true distribution of particular support examples. Surrogate tasks are explored: first image flipping, then likelihood estimation of Omniglot characters, then image generation. Image flipping does not sound like a task that is very relevant to density estimation, given that the task is deterministic. Perhaps what would make more sense would be to generate a new image given that the support set has images of a particular orientation, meaning that the model must learn how to learn densities from arbitrary rotations. Regarding Omniglot character generation, the surrogate task of computing the likelihood of known samples gives somewhat better results; however, this is to be expected when a model without attention is combined with an attention module.
All in all, the paper has some interesting ideas. I encourage the authors to work more on their submission and think of a better evaluation and resubmit. |
iclr_2018_rkZzY-lCb | Methods that calculate dense vector representations for features in unstructured data-such as words in a document-have proven to be very successful for knowledge representation. We study how to estimate dense representations when multiple feature types exist within a dataset for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels. Feat2Vec calculates embeddings for data with multiple feature types enforcing that all different feature types exist in a common space. In the supervised case, we show that our method has advantages over recently proposed methods; such as enabling higher prediction accuracy, and providing a way to avoid the cold-start problem. In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of the data. We believe that we are the first to propose a method for learning unsupervised embeddings that leverage the structure of multiple feature types. | SUMMARY.
The paper presents an extension of word2vec for structured features.
The authors introduced a new compatibility function between features and, as in the skipgram approach, they propose a variation of negative sampling to deal with structured features.
The learned representation of features is tested on a recommendation-like task.
----------
OVERALL JUDGMENT
The paper is not clear and thus I am not sure what I can learn from it.
From what is written in the paper, I have trouble understanding the definition of the model the authors propose, and also an actual NLP task where the representation induced by the model can be useful.
For this reason, I would suggest the authors make clear with a more formal notation, and the use of examples, what the model is supposed to achieve.
----------
DETAILED COMMENTS
When the authors refer to word2vec, it is not clear whether they are referring to the skipgram or the cbow algorithm; please make it clear.
Bottom of page one: "a positive example is 'semantic'". Please use another expression to describe observable examples; 'semantic' does not make sense in this context.
Levi and Goldberg (2014) do not say anything about factorization machines, could the authors clarify this point?
Equation (4), what do i and j stand for? what does \beta represent? is it the embedding vector? How is this formula related to skipgram or cbow?
The introduction of structured deep-in factorization machine should be more clear with examples that give the intuition on the rationale of the model.
The experimental section is rather poor: first, the authors only compare themselves with word2vec (cbow); it is not clear what the reader should learn from the results the authors got.
Finally, the most striking flaw of this paper is the lack of references to previous works on word embeddings and feature representation; I would suggest the authors check and compare themselves with previous work on this topic.
iclr_2018_HyFaiGbCW | We investigate the methods by which a Reservoir Computing Network (RCN) learns concepts such as 'similar' and 'different' between pairs of images using a small training dataset and generalizes these concepts to previously unseen types of data. Specifically, we show that an RCN trained to identify relationships between image-pairs drawn from a subset of digits from the MNIST database or the depth maps of subset of visual scenes from a moving camera generalizes the learned transformations to images of digits unseen during training or depth maps of different visual scenes. We infer, using Principal Component Analysis, that the high dimensional reservoir states generated from an input image pair with a specific transformation converge over time to a unique relationship. Thus, as opposed to training the entire high dimensional reservoir state, the reservoir only needs to train on these unique relationships, allowing the reservoir to perform well with very few training examples. Thus, generalization of learning to unseen images is interpretable in terms of clustering of the reservoir state onto the attractor corresponding to the transformation in reservoir space. We find that RCNs can identify and generalize linear and non-linear transformations, and combinations of transformations, naturally and be a robust and effective image classifier. Additionally, RCNs perform significantly better than state of the art neural network classification techniques such as deep Siamese Neural Networks (SNNs) in generalization tasks both on the MNIST dataset and more complex depth maps of visual scenes from a moving camera. This work helps bridge the gap between explainable machine learning and biological learning through analogies using small datasets, and points to new directions in the investigation of learning processes. | The paper uses an echo state network to learn to classify image transformations (between pairs of images) into one of fives classes. The image data is artificially represented as a time series, and the goal is generalization of classification ability to unseen image pairs. The network dynamics are studied and are claimed to have explanatory power.
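For context on the model class, a minimal echo state network keeps its input and recurrent weights fixed and random and trains only a linear readout; a generic textbook sketch (not the authors' exact setup or hyperparameters) looks like this:

import numpy as np

def run_reservoir(inputs, w_in, w_res, leak=1.0):
    # inputs: (T, d_in) sequence, e.g. the columns of an image fed in order.
    # w_in and w_res are fixed random matrices; only the readout is trained.
    n = w_res.shape[0]
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(w_in @ u + w_res @ x)
        states.append(x.copy())
    return np.array(states)            # (T, n) reservoir trajectory

def train_readout(states, targets, ridge=1e-6):
    # Ridge-regression readout on the collected reservoir states.
    X, Y = states, targets
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)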
The paper is well-written and easy to follow, but I have concerns about the claims it makes relative to how convincing the results are. The focus is on one simple, and frankly now-overused data set (MNIST). Further, treating MNIST data as a time series is artificial and clunky. Why does the series go from left to right rather than right to left or top to bottom or inside out or something else? How do the results change if the data is "temporalized" in some other way?
For training in Section 2.4, is M the number of columns for a pair of images? It's not clear how pairs are input in parallel--- one after the other? Concatenated? Interleaved columns? Something else? What are k, i, j in computing $\delta X_k$? Later, in Section 3.2, it says, "As in section 2.2, $xl(mn)$ is the differential reservoir state value of the $m$th reservoir node at time $n$ for input image $l$", but nothing like this is discussed in Section 2.2; I'm confused.
The generalization results on this one simple data set seem pretty good. But, how does this kind of approach do on other kinds of or more complex data? I'm not sure that RC has historically had very good success scaling up to "real-world" problems to date.
Table 1 doesn't really say anything. Of course, the diagonals are higher than the off diagonals because these are dot products. True, they are dot products of averages over different inputs (which is why they are less than 1), but still. Also, what Table 1 really seems to say is that the off-diagonals really aren't all that different than the diagonals, and that especially the differences between same and different digits is not very different, suggesting that what is learned is pretty fragile and likely won't generalize to harder problems. I like the idea of using dynamical systems theory to attempt to explain what is going on, but I wonder if it is not being used a bit simplistically or naively.
Why were the five transform classes chosen? It seems like the "transforms" a (same) and e (different) are qualitatively different from transforms b-d (rotated, scaled, blurred). This seems like it should be discussed.
"Thus, we infer, that the reservoir is in fact, simply training these attractors as opposed to training the entire reservoir space." What does this mean? The reservoir isn't trained at all in ESNs (which is also stated explicitly for the model presented here)…
For 3.3, why were those three classes chosen? Was this experiment tried with other subsets of three classes? Why are results reported on only the one combination of rotated/blurred vs. rotated? Were others tried? If so, what were the results? If not, why? How does the network know when to take more than the highest output (so it can say that two transforms have been applied)? In the case of combination, counting either transform as the correct output kind of seems like cheating a bit—it overstates how well the model is doing. Also, does the order in which the transforms are applied affect their relative representative strength in the reservoir?
The comparison with SNNs is kind of interesting, but I'm not sure that I'm (yet) convinced, as there is little detail on how the experiment was performed and what was done (or not) to try to get the SNN to generalize. My suspicion is that with the proper approach, an SNN or similar non-dynamical system could generalize well on these tasks. The need for a dynamical system could be argued to make sense for the camera task, perhaps, as video frames naturally form a time series; however, as already mentioned, for the MNIST data, this is not the case, and the fact that the SNN does not generalize here seems likely due to their under utilization rather than due to an inherent lack of capability.
I don't believe that there is sufficient support for this statement in the conclusion, "[ML/deep networks] do not work as well for generalization of learning. In generalized learning, RCNs outperform them, due to their ability to function as a dynamical system with ‘memory’." First of all, ML is all about generalization, and there are lots and lots and lots of results showing that many ML systems generalize very well on a wide variety of problems, well beyond just classification, in fact. And, I don't think the the paper has convincingly shown that a dynamical system 'memory' is doing something especially useful, given that the main task studied, that of character recognition (or classification of transformation or even transformation itself), does not require such a temporal ability. |
iclr_2018_rkr1UDeC- | Published as a conference paper at ICLR 2018 LARGE SCALE DISTRIBUTED NEURAL NETWORK TRAINING THROUGH ONLINE DISTILLATION
Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing 6 × 10^11 tokens and based on the Common Crawl repository of web data. | This paper provides a very original & promising method to scale distributed training beyond the current limits of mini-batch stochastic gradient descent. As the authors point out, scaling distributed stochastic gradient descent to more workers typically requires larger batch sizes in order to fully utilize computational resources, and increasing the batch size has diminishing returns. This is clearly a very important problem, as it is a major blocker for current machine learning models to scale beyond the size of models and datasets we currently use. The authors propose to use distillation as a mechanism of communication between workers, which is attractive because prediction scores are more compact than model parameters, model-agnostic, and can be considered to be more robust to out-of-sync differences. This is a simple and sensible idea, and empirical experiments convincingly demonstrate the advantage of the method in large scale distributed training.
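To make the mechanism concrete, a per-worker objective of the kind described could look like the sketch below (my own illustration; the weight `alpha` and the use of a KL agreement term are assumptions, not details taken from the paper):

```python
import torch
import torch.nn.functional as F

def codistillation_loss(logits, targets, stale_logits, alpha=0.5):
    """Sketch of a per-worker co-distillation objective.

    `stale_logits` are predictions from a possibly out-of-date copy of the
    *other* model on the same batch; `alpha` weights the agreement term and is
    a hypothetical hyperparameter, not a value from the paper.
    """
    ce = F.cross_entropy(logits, targets)
    # encourage agreement with the other model's (detached) soft predictions
    agree = F.kl_div(F.log_softmax(logits, dim=-1),
                     F.softmax(stale_logits.detach(), dim=-1),
                     reduction="batchmean")
    return ce + alpha * agree
```

The key point is that `stale_logits` can come from a rarely refreshed checkpoint of the other model, so the extra communication is cheap.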
I would encourage the authors to experiment in broader settings, in order to demonstrate the general applicability of the proposed method, and also to help readers better understand its limitations. The authors only provide a single positive data point: that co-distillation was useful in scaling up from 128 GPUs to 258 GPUs, for the particular language modeling problem (commoncrawl) which others have not previously studied. In order for other researchers who work on different problems and different system infrastructure to judge whether this method will be useful for them, however, they need to understand better when codistillation succeeds and when it fails. It would be more useful to provide experiments with smaller and (if possible) larger numbers of GPUs (16, 32, 64, and 512?, 1024?), so that we can more clearly understand how useful this method is in the regime where mini-batch stochastic gradient descent continues to scale. More diversity of models would also help in understanding the robustness of this method to the choice of model. Why not consider ImageNet? Goyal et al. report that it took an hour for them to train ResNet on ImageNet with 256 GPUs, and the authors could demonstrate that it can be trained faster.
Furthermore, the authors briefly mention that staleness of parameters up to tens of thousands of updates did not have any adverse effect, but it would be good to know how the learning curve behaves as a function of this delay. Knowing how much delay we can tolerate will motivate us to design different methods of communication between teacher and student models.
iclr_2018_HyBbjW-RW | Driven by the need for parallelizable hyperparameter optimization methods, this paper studies open loop search methods in the sense that the sequence is predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions. In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from kDPPs defined over spaces with a mixture of discrete and continuous dimensions. Our experiments show significant benefits over uniform random search in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel. | In this paper, the authors consider non-sequential (in the sense that many hyperparameter evaluations are done simultaneously) and uninformed (in the sense that the hyperparameter evaluations are chosen independent of the validation errors observed) hyperparameter search using determinantal point processes (DPPs). DPPs are probability distributions over subsets of a ground set with the property that subsets with more "diverse" elements have higher probability. Diverse here is defined using some similarity metric, often a kernel. Under the RBF kernel, the more diverse a set of vectors is, the closer the kernel matrix becomes to the identity matrix, and thus the larger the determinant (and therefore probability under the DPP) grows. The authors propose to do hyperparameter tuning by sampling a set of hyperparameter evaluations from a DPP with the RBF kernel.
Overall, I have a couple of concerns about novelty as well as the experimental evaluation for the authors to address. As the authors rightly point out, sampling hyperparameter values from a DPP is equivalent to sampling proportional to the posterior uncertainy of a Gaussian process, effectively leading to a pure exploration algorithm. As the authors additionally point out, such methods have been considered before, including methods that directly propose to batch Bayesian optimization by choosing a single exploitative point and sampling the remainder of the batch from a DPP (e.g., [Kathuria et al., 2016]). The default procedure for parallel BayesOpt used by SMAC [R2] is (I believe) also to choose a purely explorative batch. I am unconvinced by the argument that "while this can lead to easy parallelization within one iteration of Bayesian optimization, the overall algorithms are still sequential." These methods can typically be expanded to arbitrarily large batches and fully utilize all parallel hardware. Most implementations of batch Bayesian optimization in practice (SMAC and Spearmint as examples) will even start new jobs immediately as jobs finish -- these implementations do not wait for the entire batch to finish typically.
Additionally, while there has been some work extending GP-based BayesOpt to tree-based parameters [R3], at a minimum SMAC in particular is known to be well suited to the tree-based parameter search considered by the authors. I am not sure that I agree that TPE is state-of-the-art on these problems: SMAC typically does much better in my experience.
Ultimately, my concern is that--considering these tools are open source and relatively stable software at this point--if DPP-only based hyperparameter optimization is truly better than the parallelization approach of SMAC, it should be straightforward enough to download SMAC and demonstrate this. If the argument that BayesOpt is somehow "still sequential" is true, then k-DPP-RBF should outperform these tools in terms of wall clock time to perform optimization, correct?
[R1] Kathuria, Tarun and Deshpande, Amit and Kohli, Pushmeet. Batched Gaussian Process Bandit Optimization via Determinantal Point Processes, 2016.
[R2] Several papers, see: http://www.cs.ubc.ca/labs/beta/Projects/SMAC/
[R3] Jenatton, R., Archambeau, C., González, J. and Seeger, M., 2017, July. Bayesian Optimization with Tree-structured Dependencies. In International Conference on Machine Learning (pp. 1655-1664). |
iclr_2018_BJ4prNx0W | Learning programs with neural networks is a challenging task, addressed by a long line of existing work. It is difficult to learn neural networks which will generalize to problem instances that are much larger than those used during training. Furthermore, even when the learned neural program empirically works on all test inputs, we cannot verify that it will work on every possible input. Recent work has shown that it is possible to address these issues by using recursion in the Neural Programmer-Interpreter, but this technique requires a verification set which is difficult to construct without knowledge of the internals of the oracle used to generate training data. In this work, we show how to automatically build such a verification set, which can also be directly used for training. By interactively querying an oracle, we can construct this set with minimal additional knowledge about the oracle. We empirically demonstrate that our method allows automated learning and verification of a recursive NPI program with provably perfect generalization. | Previous work by Cai et al. (2017) shows how to use Neural Programmer-Interpreter (NPI) framework to prove correctness of a learned neural network program by introducing recursion. It requires generation of a diverse training set consisting of execution traces which describe in detail the role of each function in solving a given input problem. Moreover, the traces need to be recursive: each function only takes a finite, bounded number of actions. In this paper, the authors show how training set can be generated automatically satisfying the conditions of Cai et al.'s paper. They iteratively explore all
possible behaviors of the oracle in a breadth-first manner, and the bounded nature of the recursive
oracle ensures that the procedure converges. As a running example, they show how this can be done for bubblesort. The training set generated in this process may have a lot of duplicates, and the authors show how these duplicates can possibly be removed. It indeed shows a dramatic reduction in the number of training samples for the three experiments that have been shown in the paper.
I am not an expert in this area, so it is difficult for me to judge the technical merit of the work. My feeling from reading the paper is that it is rather incremental over Cai et al. I am impressed by the results of the three experiments that have been shown here, specifically, the reduction in the training samples once they have been generated is significant. But these are also the same set of experiments performed by Cai et al.
Given that the original number of traces generated is huge, I do not understand why this method is at all practical. This also explains why the authors have only tested the performance on extremely small-sized data. It will not scale. So, I am hesitant to accept the paper. I would have been more enthusiastic if the authors had proposed a way to combine training-space exploration with the removal of redundant traces to make the whole process more scalable, and had run experiments on reasonably sized data.
iclr_2018_rkONG0xAW | This paper presents a storage-efficient learning model titled Recursive Binary Neural Networks for embedded and mobile devices having a limited amount of on-chip data storage such as hundreds of kilo-Bytes. The main idea of the proposed model is to recursively recycle data storage of weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on a chip, achieving better classification accuracy. Such efficient use of on-chip storage reduces off-chip storage accesses, improving energy-efficiency and speed of training. We verified the proposed training model with deep and convolutional neural network classifiers on the MNIST and voice activity detection benchmarks. For the deep neural network, our model achieves data storage requirement of as low as 2 bits/weight, whereas the conventional binary neural network learning models require data storage of 8 to 32 bits/weight. With the same amount of data storage, our model can train a bigger network having more weights, achieving 1% less test error than the conventional binary neural network learning model. To achieve the similar classification error, the conventional binary neural network model requires 4× more data storage for weights than our proposed model. For the convolution neural network classifier, the proposed model achieves 2.4% less test error for the same on-chip storage or 6× storage savings to achieve the similar accuracy. | There could be an interesting idea here, but the limitations and applicability of the proposed approach are not clear yet. More analysis should be done to clarify its potential. Besides, the paper seriously needs to be reworked. The text in general, but also the notation, should be improved.
In my opinion, the authors should explain how to apply their algorithm to more general network architectures, and test it, in particular to convnets. An experiment on a modern dataset beyond MNIST would also be a welcome addition.
Some comments:
- The method is present as a fully-connected network training procedure. But the resulting network is not really fully-connected, but modular. This is clear in Fig. 1 and in the explanation in Sect. 3.1. The newly added hidden neurons at every iteration do not project to the previous pool of hidden neurons. It should be stressed that the networks end up with this non-conventional “tiled” architecture. Are there studies where the capacity of such networks is investigated, when all the weights are trained concurrently.
- It wasn’t clear to me whether the memory reallocation could be easily implemented in hardware. A few references or remarks on this issue would be welcome.
- The work “Efficient supervised learning in networks with binary synapses” by Baldassi et al. (PNAS 2007) should be cited. Although usually ignored by the deep learning community, it actually was a pioneering study on the use of low resolution weights during inference while allowing for auxiliary variables during learning.
- Coming back my main point above, I didn’t really get the discussion on Sect. 5.3. Why didn’t the authors test their algorithm on a convnet? Are there any obstacles in doing so? It seems quite important to understand this point, as the paper appeals to technical applications and convolution seems hard to sidestep currently.
- Fig. 3: xx-axis: define storage efficiency and storage requirement.
- Fig. 4: What’s an RSBL? Acronyms should be defined.
- Overall, language and notation should really be refined. I had a hard time reading Algorithm 1, as the notation is not even defined anywhere. And this problem extends throughout the paper.
For example, just looking at Sect. 4.1, “training and testing data x is normalized…”, if x is not properly defined, it’s best to omit it; “… 2-dimentonal…”, at least major typos should be scanned and corrected. |
iclr_2018_SJCPLLpaW | DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×. | The paper proposes an approach that offers speedup on common convolutional neural networks. It presents the approach well and shows results comparing with other popular frameworks used in the field.
Originality
- The automation of parallelism across the different dimensions in each of the layers appears somewhat new. Although parallelism across each of the individual dimensions has been explored (batch parallel is most common and best supported, height and width is discussed at least in the DistBelief paper), automatically exploring this to find the most efficient approach is new. The splitting across channels seems not to have been covered in a paper before.
Significance
- Paper shows a significant speedup over existing approaches on a single machine (16 GPUs). It is unclear how well this would translate across machines or to more devices, and also on newer devices - the experiments were all done on 16 K80s (3 generations old GPUs). While the approach is interesting, its impact also depends on the speedup on the common hardware used today.
Pros:
- Providing better parallelism opportunities for convolutional neural networks
- Simple approach to finding optimal global configurations that seems to work well
- Positive results with significant speedups across 3 different networks
Cons:
- Unclear if speedups hold on newer devices
- Useful to see how this scales across more than 1 machine
- The claim on overlapping computation with data transfer seems incorrect. I am pretty sure TensorFlow, and possibly PyTorch, support this.
Questions:
- How long does finding the optimal global configuration take for each model? |
iclr_2018_Sy3XxCx0Z | Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset. | Update:
The response addressed all my major concerns, and I think the paper is sound. (I'm updating my confidence to a 5.) So, the paper makes an empirically *very* small step in an interesting line of language understanding research. This paper should be published in some form, but my low-ish score is due simply to my worry that ICLR is not the venue. I think this would be a clear 'accept' as a *ACL short paper, and would probably be viable as a *ACL long paper, but it will definitely have less impact on the overall field of representation learning than will the typical ICLR paper, so I can recommend it only with reservations.
--
This paper presents a method to use external lexical knowledge (word–word relations from WordNet) as an auxiliary input when solving the problem of textual entailment (aka NLI). The idea of accessing outside commonsense knowledge within an end-to-end trained model is one that I expect to be increasingly important in work on language understanding. This paper does not make that much progress on the problem in general—the methods here are quite specific to words and to NLI—and the proposed method only yields large empirical gains in a reduced-data setting, but the paper serves as a well-executed proof of concept. In short, the paper is likely to be of low technical impact, but it's interesting and thorough enough that I lean slightly toward acceptance.
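As a concrete illustration of the kind of lexical signal being injected, one could compute a small indicator vector of WordNet relations per word pair roughly as follows (my own sketch using NLTK; the exact relation set, depth limits and weighting used in the paper may differ, and the WordNet corpus must be downloaded first via nltk.download('wordnet')):

```python
from nltk.corpus import wordnet as wn

def wordnet_relation_features(w1, w2):
    """5-dim indicator vector [synonym, antonym, hypernym, hyponym, co-hyponym]
    for a word pair; a rough illustration, not the paper's exact feature set."""
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    feats = [0.0] * 5
    if any(a == b for a in s1 for b in s2):
        feats[0] = 1.0                                  # share a synset => synonyms
    ant = {l.name() for s in s1 for lem in s.lemmas() for l in lem.antonyms()}
    if any(lem.name() in ant for s in s2 for lem in s.lemmas()):
        feats[1] = 1.0                                  # antonyms
    hypers2 = {h for s in s2 for h in s.closure(lambda x: x.hypernyms())}
    if any(a in hypers2 for a in s1):
        feats[2] = 1.0                                  # w1 is a hypernym of w2
    hypers1 = {h for s in s1 for h in s.closure(lambda x: x.hypernyms())}
    if any(b in hypers1 for b in s2):
        feats[3] = 1.0                                  # w1 is a hyponym of w2
    if any(h1 == h2 for s in s1 for h1 in s.hypernyms()
                    for t in s2 for h2 in t.hypernyms()):
        feats[4] = 1.0                                  # co-hyponyms (shared direct hypernym)
    return feats
```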
My only concern is on fair comparison: Numbers from this model are compared with numbers from the published ESIM model in several places (Table 3, Figure 2, etc.) as a way to provide evidence for the paper's core claim (that the added knowledge in the proposed model helps). This only constitutes clear evidence if the proposed model is identical to ESIM in all of its unimportant details—word representations, hyperparameter tuning methods, etc. Can the authors comment on this?
For what it's worth, the existence of another paper submission on roughly the same topic with roughly the same results (https://openreview.net/pdf?id=B1twdMCab) makes me more confident that the main results in this paper are sound, since they've already been replicated, at least to a coarse approximation.
Minor points:
For TransE, what does this mean:"However, these kind of approaches usually need to train a knowledge-graph embedding beforehand."
You should say more about why you chose the constant 8 in Table 1 (both why you chose to hard code a value, and why that value).
There's a mysterious box above the text 'Figure 1'. Possibly a figure rendering error?
The LSTM equations are quite widely known. I'd encourage you to cite a relevant source and remove them.
Say more about why you choose equation (9). This notably treats all five relation types equally, which seems like a somewhat extreme simplifying assumption.
Equation (15) is confusing. Is a^m a matrix, since it doesn't have an index on it?
What is "early stopping with patience of 7"? Is that meant to mean 7 epochs?
The opening paragraph of 5.1 seems entirely irrelevant, as do the associated results in the results table. I suspect that this might be an opportunity for a gratuitous self-citation.
There are plenty of typos: "... to make it replicatibility purposes."; "Specifically, we use WordNet to measure the semantic relatedness of the *word* in a pair"; etc. |
iclr_2018_HyWrIgW0W | Published as a conference paper at ICLR 2018 STOCHASTIC GRADIENT DESCENT PERFORMS VARIATIONAL INFERENCE, CONVERGES TO LIMIT CYCLES FOR DEEP NETWORKS
Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such "out-of-equilibrium" behavior is a consequence of highly non-isotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims, proven in the appendix. | The paper takes a closer look at the analysis of SGD as variational inference, first proposed by Duvenaud et al. 2016
and Mandt et al. 2016. In particular, the authors point out that in general, SGD behaves quite differently from Langevin diffusion due to the multivariate nature of the Gaussian noise. As the authors show based on the Fokker-Planck equation of the underlying stochastic process, there exists a conservative current (a gradient of an underlying potential) and a non-conservative current (which might induce stationary persistent currents at long times). Because of the non-conservative part, the dynamics of SGD may show oscillations, and these oscillations may even prevent the algorithm from converging to the 'right' local optima. The theoretical analysis is carried out very nicely, and the theory is supported by experiments on two-dimensional toy examples, and Fourier spectra of the iterates of SGD.
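For readers who want to reproduce the oscillation diagnostic, a simple version of the Fourier analysis could look like the sketch below; this is my own reconstruction of the kind of computation involved, not the authors' exact procedure.

```python
import numpy as np

def iterate_power_spectrum(weight_traj):
    """Power spectrum of SGD iterates.

    weight_traj: array of shape (T, d) holding a recorded subset (or a fixed
    random projection) of the weights at each SGD step.  For pure Brownian
    motion around a critical point the increments are white noise, so their
    spectrum is flat; a broad peak at some frequency is evidence of the
    oscillatory, out-of-equilibrium behaviour discussed above.
    """
    increments = np.diff(weight_traj, axis=0)
    spec = np.abs(np.fft.rfft(increments, axis=0)) ** 2
    freqs = np.fft.rfftfreq(increments.shape[0])
    return freqs, spec.mean(axis=1)   # average the spectrum over dimensions
```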
This is a nice paper which I would like to see accepted. In particular I appreciate that the authors stress the importance
of 'non-equilibrium physics' for understanding the SGD process. Also, the presentation is quite clear and the paper well written.
There are a few minor points which I would like to ask the authors to address:
1. Why cite Kingma and Welling as a source for variational inference in section 3.1? VI is a much older field, and Kingma and Welling proposed a very special form of VI, namely amortized VI with inference networks. A better citation would be Jordan et al 1999.
2. I'm not sure how much to trust the Fourier-spectra. In particular, perhaps the deviations from Brownian motion could also be due to the discrete nature of SGD (i.e. that the continuous-time formalism is only an approximation of a discrete process). Could you elaborate on this?
3. Could you give the reader more details on how the uncertainty estimates on the Fourier transformations were obtained?
Thanks. |
iclr_2018_Hy1d-ebAb | Graphs are fundamental data structures required to model many important realworld data, from knowledge graphs, physical and social interactions to molecules and proteins. In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest. After learning, these models can be used to generate samples with similar properties as the ones in the dataset. Such models can be useful in a lot of applications, e.g. drug discovery and knowledge graph construction. The task of learning generative models of graphs, however, has its unique challenges. In particular, how to handle symmetries in graphs and ordering of its elements during the generation process are important issues. We propose a generic graph neural net based model that is capable of generating any arbitrary graph. We study its performance on a few graph generation tasks compared to baselines that exploit domain knowledge. We discuss potential issues and open problems for such generative models going forward. | The paper introduces a generative model for graphs. The three main decision functions in the sequential process are computed with neural nets. The neural nets also compute node embeddings and graph embeddings and the embeddings of the current graph are used to compute the decisions at time step T. The paper is well written but, in my opinion, a description of the learning framework should be given in the paper. Also, a summary of the hyperparameters used in the proposed system should be given. It is claimed that all possible types of graphs can be learned which seems rather optimistic. For instance, when learning trees, the system is tweaked for generating trees. Also, it is not clear whether models for large graphs can be learned. The paper contain many interesting contributions but, in my opinion, the model is too general and the focus should be given on some retricted classes of graphs. Therefore, I am not convinced that the paper is ready for publication at ICLR'18.
* Introduction. I am not convinced by the discussion on graph grammars in the second paragraph. It is known that there does not exist a definition of regular grammars for graphs (see Courcelle and Engelfriet, graph structure and monadic second-order logic ...). Moreover, many problems are known to be undecidable. For weighted automata, the reference Droste and Gastin considers weighted word automata and weighted logic for words. Therefore it does not seem pertinent here. A more complete reference is "handbook of weighted automata" by Droste. Also, many decision problems for weighted automata are known to be undecidable. I am not sure that the paragraph is useful for the paper. A discussion on learning as in footnote 1 would be more interesting.
* Related work. I am not an expert in the field but I think that there are recent references which could be cited for probabilistic models of graphs.
* Section 3.1. Constraints can be introduced to impose structural properties of the generated graphs. This leads to the question of cheating in the learning process.
* Section 3.2. The functions f_m and g_m for defining the graph embedding are left undefined (one common parameterization from the graph neural network literature is sketched at the end of these comments). As the graph embedding is used in the generating process and for learning, the functions must be defined and their choice explained and justified.
* Section 3. As said before, a general description of the learning framework should be given. Also, it is not clear to me how the node and graph embeddings are initialized and how they evolve during the learning process. Therefore, it is not clear to me why the proposed updating framework for the embeddings allows the model to generate decision functions adapted to the graphs to be learned. Consequently, it is difficult to see the influence of T. Also, it should be said whether the node embeddings and graph embeddings for the output graph can be useful.
* Section 3. A summary of all the hyperparameters should be given.
* Section 4.1. The number of steps is not given. Do you present the same graph multiple times? Why T=2 and not 1 or 10?
* Section 4.2. From Table 2, it seems that all permutations are used for training, which is rather large for molecules of size 20. Do you use tweaks in the generation process?
* Section 4.3. The generation process is adapted for generating trees which seems to be cheating. Again the choice of T seems ad hoc and based on computational burden.
* Section 5 should contain a discussion on complexity issues because it is not clear how the model can learn large graphs.
* Section 5. The discussion on the difficulty of training should be emphasized and connected to the --missing-- description of the model architecture and its hyperparameters.
* Acronyms should be expanded at their first use.
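Regarding the comment on f_m and g_m above: a common parameterization in the graph neural network literature, offered here only as a guess at what the authors intend, is a gated sum readout over node embeddings:

```python
import torch
import torch.nn as nn

class GatedGraphReadout(nn.Module):
    """One standard way to aggregate node embeddings into a graph embedding.

    This is a guess at the role of f_m and g_m: f_m maps each node embedding
    to a graph-sized vector and g_m produces a per-node gate, after which the
    gated vectors are summed over the nodes of the current graph.
    """
    def __init__(self, node_dim, graph_dim):
        super().__init__()
        self.f_m = nn.Linear(node_dim, graph_dim)   # node -> graph-sized vector
        self.g_m = nn.Linear(node_dim, graph_dim)   # node -> gate

    def forward(self, h_nodes):                      # h_nodes: (n_nodes, node_dim)
        return (torch.sigmoid(self.g_m(h_nodes)) * self.f_m(h_nodes)).sum(dim=0)
```

Whether or not this matches the authors' choice, the point stands that the paper should state and justify it explicitly.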
iclr_2018_Syg-YfWCW | Published as a conference paper at ICLR 2018 GO FOR A WALK AND ARRIVE AT THE ANSWER: REASONING OVER PATHS IN KNOWLEDGE BASES USING REINFORCEMENT LEARNING
Knowledge bases (KB), both automatically and manually constructed, are often incomplete -many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. Since random walks are impractical in a setting with unknown destination and combinatorially many paths from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. On a comprehensive evaluation on seven knowledge base datasets, we found MINERVA to be competitive with many current state-of-the-art methods. | The paper proposes a new approach (Minerva) to perform query answering on knowledge bases via reinforcement learning. The method is intended to answer queries of the form (e,r,?) on knowledge graphs consisting of dyadic relations. Minerva is evaluated on a number of different datasets such as WN18, NELL-995, and WikiMovies.
The paper proposes interesting ideas to attack a challenging problem, i.e., how to perform query answering on incomplete knowledge bases. While RL methods for KG completion have been proposed recently (e.g., DeepPath), Minerva improves over these approaches by not requiring the target entity. This property can be indeed be important to perform query answering efficiently. The proposed model seems technically reasonable and the paper is generally written well and good to understand. However, important parts of the paper seem currently unfinished and would benefit from a more detailed discussion and analysis.
Most importantly, I'm currently missing a better motivation and especially a more thorough evaluation on how Minerva improves over non-RL methods. For instance, the authors mention multi-hop methods such as (Neelakantan, 2015; Guu, 2015) in the introduction. Since these methods are closely related, it would be important to compare to them experimentally (unfortunately, DeepPath doesn't do this comparison either). For instance, eliminating the need to pre-compute paths might be irrelevant when it doesn't improve actual performance. Similarly, the paper mentions improved inference time, which indeed is a nice feature. However, I'm wondering, what is the training time and how does it compare to standard methods like ComplEx. Also, how robust is training using REINFORCE?
With regard to the experimental results: The improvements over DeepPath on NELL and on WikiMovies are indeed promising. I found the later results the most convincing, as the setting is closest to the actual task of query answering. However, what is worrying is that Minerva doesn't do well on WN18 and FB15k-237 (for which the results are, unfortunately, only reported in the appendix). On FB15k-237 (which is harder than WN18 and arguably more relevant for real-world scenarios since it is a subset of a real-world knowledge graph), it is actually outperformed by the relatively simple DistMult method. From these results, I find it hard to justify that "MINERVA obtains state-of-the-art results on seven KB datasets, significantly outperforming prior methods", as stated in the abstract.
Further comments:
- How are non-existing relations handled, i.e., queries (e,r,x) where there is no valid x? Does Minerva assume there is always a valid answer?
- Comparison to DeepPath: Did you evaluate Minerva with fixed embeddings? Since the experiments in DeepPath used fixed embeddings, it would be important to know how much of the improvements can be attributed to this difference.
- The experimental section covers quite a lot of different tasks and datasets (Countries, UMLS, Nations, NELL, WN18RR, Gridworld, WikiMovies) all with different combinations of methods. For instance, countries is evaluated against ComplEx,NeuralLP and NTP; NELL against DeepPath; WN18RR against ConvE, ComplEx, and DistMult; WikiMovies against MemoryNetworks, QA and NeuralLP. A more focused evaluation with a consistent set of methods could make the experiments more insightful. |
iclr_2018_rkgOLb-0W | Published as a conference paper at ICLR 2018 NEURAL LANGUAGE MODELING BY JOINTLY LEARNING SYNTAX AND LEXICON
We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks. | ** UPDATE ** upgraded my score to 7 based on the new version of the paper.
The main contribution of this paper is to introduce a new recurrent neural network for language modeling, which incorporates a tree structure. More precisely, the model learns constituency trees (without any supervision), to capture syntactic information. This information is then used to define skip connections in the language model, to capture longer dependencies between words. The update of the hidden state does not depend only on the previous hidden state, but also on the hidden states corresponding to the following words: all the previous words belonging to the smallest subtree containing the current word, such that the current word is not the left-most one. The authors propose to parametrize trees using "syntactic distances" between adjacent words (a scalar value for each pair of adjacent words w_t, w_{t+1}). Given these distances, it is possible to obtain the constituents and the corresponding gating activations for the skip connections. These different operations can be relaxed to differentiable operations, so that stochastic gradient descent can be used to learn the parameters. The model is evaluated on three language modeling benchmarks: character level PTB, word level PTB and word level text8. The induced constituency trees are also evaluated, for sentences of length 10 or less (which is the standard setting for unsupervised parsing).
Overall, I really like the main idea of the paper. The use of "syntactic distances" to parametrize the trees is clever, as they can easily be computed using only partial information up to time t. From these distances, it is also relatively straightforward to obtain which constituents (or subtrees) a word belongs to (and thus, the corresponding gating activations). Moreover, the operations can easily be relaxed to obtain a differentiable model, which can easily be trained using stochastic gradient descent.
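To make the mechanism concrete, the discrete rule can be stated as: the skip connection from position i to the current step t stays open only if no syntactic distance between i and t exceeds the distance that opens the current constituent, and the relaxation replaces the hard comparison with a sigmoid. The snippet below is my own schematic reconstruction; the indexing conventions and the use of a plain sigmoid are assumptions and may differ from the paper's exact equations.

```python
import torch

def skip_gates(d, t, temperature=5.0):
    """Schematic gates for the skip connections feeding time step t.

    d[j] is the syntactic distance between adjacent words j and j+1.  In the
    discrete view, position i contributes to step t only if no boundary
    between i and t has a larger distance than the boundary d[t-1] just
    before the current word; the soft version uses a sigmoid instead of a
    hard comparison.
    """
    alphas = torch.sigmoid(temperature * (d[t - 1] - d[:t - 1]))  # one per earlier boundary
    # gate for position i = product of alphas over boundaries between i and t-1
    gates = torch.flip(torch.cumprod(torch.flip(alphas, dims=[0]), dim=0), dims=[0])
    return gates   # gates[i] in [0, 1] weights the contribution of hidden state i
```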
The results reported on the language modeling experiments are strong. One minor comment here is that it would be nice to have an ablation analysis, as it is possible to obtain similarly strong results with simpler models (such as plain LSTM).
My main concern regarding the paper is that it is a bit hard to understand. In particular, in section 4, the authors alternate between discrete and relaxed values: at the end of section 4.1, it is implied that alpha are in [0, 1], but in equation 6, alpha are in {0, 1}, then relaxed in equation 9 to [0, 1] again. I am also wondering whether it would make more sense to start by introducing the syntactic distances, then the alphas and finally the gates? I also found section 5 to be quite confusing. While I get the general idea, I am not sure what the relation is between hidden states h and m (section 5.1). Is there a mixup between h defined in equation 10 and h from section 5.1? I am aware that it is not straightforward to describe the proposed method, but believe it would be a much stronger paper if written more clearly.
To conclude, I really like the method proposed in this paper, and believe that the experimental results are quite strong.
My main concern regarding the paper is its clarity: I will gladly increase my score if the authors can improve the writing. |
iclr_2018_HyRnez-RW | MULTI-MENTION LEARNING FOR READING COMPREHENSION WITH NEURAL CASCADES
Reading comprehension is a challenging task, especially when executed across longer or across multiple evidence documents, where the answer is likely to reoccur. Existing neural architectures typically do not scale to the entire evidence, and hence, resort to selecting a single passage in the document (either via truncation or other means), and carefully searching for the answer within that passage. However, in some cases, this strategy can be suboptimal, since by focusing on a specific passage, it becomes difficult to leverage multiple mentions of the same answer throughout the document. In this work, we take a different approach by constructing lightweight models that are combined in a cascade to find the answer. Each submodel consists only of feed-forward networks equipped with an attention mechanism, making it trivially parallelizable. We show that our approach can scale to approximately an order of magnitude larger evidence documents and can aggregate information at the representation level from multiple mentions of each answer candidate across the document. Empirically, our approach achieves state-of-the-art performance on both the Wikipedia and web domains of the TriviaQA dataset, outperforming more complex, recurrent architectures. | The authors present a scalable model for question answering that is able to train on long documents. On the TriviaQA dataset, the proposed model achieves state of the art results on both domains (Wikipedia and web). The formulation of the model is straightforward; however, I am skeptical about whether the results prove the premise of the paper (e.g. multi-mention reasoning is necessary). Furthermore, I am slightly unconvinced about the authors' claim of efficiency. Nevertheless, I think this work is important given its performance on the task.
1. Why is this model successful? Multi-mention reasoning or more document context?
I am not convinced of the necessity of multi-mention reasoning, which the authors use as motivation, as shown in the examples in the paper. For example, in Figure 1, the answer is solely obtained using the second last passage. The other mentions provide signal, but do not provide conclusive evidence. Perhaps I am mistaken, but it seems to me that the proposed model cannot handle negation; can the authors confirm/deny this? I am also skeptical about the computational efficiency of a model that scores all spans in a document (which is O(N^2), where N is the document length). Can you show some analysis of your model results that confirm/deny this hypothesis?
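For contrast, the usual score-level way of exploiting multiple mentions is to pool the scores of all spans that share an answer string, e.g. with a logsumexp; the paper instead aggregates at the representation level, so the snippet below is only an illustration of the baseline notion of multi-mention evidence, not the authors' model.

```python
import torch

def aggregate_mention_scores(span_scores, span_texts):
    """Score-level aggregation over repeated answer mentions.

    span_scores: 1-D tensor, one score per candidate span in the document.
    span_texts:  the (normalised) answer string of each span.  All spans that
    share a string are treated as mentions of one candidate, and their scores
    are combined with logsumexp so every mention contributes evidence.
    """
    candidates = {}
    for i, text in enumerate(span_texts):
        candidates.setdefault(text, []).append(i)
    return {text: torch.logsumexp(span_scores[idx], dim=0)
            for text, idx in candidates.items()}
```

An ablation comparing this kind of score-level pooling against the representation-level aggregation would directly address the question of whether multi-mention reasoning is what drives the gains.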
2. Why is the computational complexity not a function of the number of spans?
It seems like the derivation presents several equations that score a given span. Perhaps I am mistaken, but there seem to be n^2 spans in the document that one has to score. Shouldn't the computational complexity then be at least O(n^2), which makes it actually much slower than, say, SQuAD models that do greedy decoding O(2n + nm)?
Some minor notes
- 3.3.1 seems like an attention computation in which the attention context over the question and span is computed using the question. Explicitly mentioning this may help the reader grasp the formulation.
- Same for 3.4, which seems like the biattention (Seo 2017) or coattention (Xiong 2017) from previous squad work.
- The sentence "We define ... to be the embeddings of the l words of the sentence that contains s." is not very clear. Do you mean that the sentence contains l words? It could be interpreted that the span has l words.
- There is a typo in your 3.7 "level 1 complexity": there is an extra O inside the big O notation. |
iclr_2018_HkMhoDITb | Recent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks. In this paper, we introduce free-energy-based reinforcement learning (FERL) as an application of quantum hardware. We propose a method for processing a quantum annealer's measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM). We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer. The experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks. | There is no scientific consensus on whether quantum annealers such as the D-Wave 2000Q that use the transverse-field Ising models yield any gains over classical methods (c.f. https://arxiv.org/abs/1703.00622). However, it is an exciting research area and this paper is an interesting demonstration of the feasibility of using quantum annealers for reinforcement learning.
This paper builds on Crawford et al. (2016), an unpublished preprint, who develop a quantum Boltzmann machine reinforcement learning algorithm (QBM-RL). A QBM consists of adding a transverse field term to the RBM Hamiltonian (negative log likelihood), but the benefits of this for unsupervised tasks are unclear (c.f. https://arxiv.org/abs/1601.02036, another unpublished preprint). QBM-RL consists of using a QBM to model the state-action variables: it is an undirected graphical model whose visible nodes are clamped to observed state-action pairs. The hidden nodes model dependencies between states and actions, and the weights of the model are updated to maximize the free energy or Q function (value of the state-action pair).
The authors extend QBM-RL to work with quantum annealers such as the D-Wave 2000Q, which has a specific bipartite graph structure and requires special consideration because it can only yield samples of hidden variables in a fixed basis. To overcome this, the authors develop a Suzuki-Trotter expansion and call it 'replica stacking', where a classical Hamiltonian in one dimension higher is used to approximate the quantum Hamiltonian. This enables the use of quantum annealers. The authors compare their method to standard baselines in a grid world environment.
Overall, I do not want to criticize the work. It is an interesting proof of concept. But given the high price of quantum annealers, limited applicability of the technique, and unclear benefits of the authors' method, I do not think it is relevant to this specific conference. It may be better suited to a workshop specific to quantum machine learning methods.
=======================================
+ please add an algorithm box for your method. It deviates significantly from QBM-RL. For example, something like: (1) init weights of boltzmann machine randomly (2) sample c_eff ~ C from the pool of configurations sampled from the transverse-field Ising model using a quantum annealer with chimera graph (3) using the samples, calculate effective classical hamiltonian used to approximate the quantum system (4) use the weight update rules derived from Bellman equations (spell out the rules).
+ moving the details of sampling into the appendix would help; they are not important for understanding the main ingredients of your method
There are so many moving parts in your system, and someone without a physics background will struggle to understand it. Clarifying the algorithm in terms familiar to machine learning researchers will go a long way toward helping people understand your method.
+ the benefits of your method are unclear - it looks like the method works, but doesn't outperform the others. This is fine, but it is better to be straightforward about this and bill it as a 'proof of concept'
+ perhaps consider rebranding the paper as something like 'RL using replica stacking for sampling from quantum boltzmann machines with quantum annealers'. Elucidating why replica stacking is a crucial contribution of your work would be helpful, and could be of broad interest in the machine learning community. Right now it is too dense to be useful for the average person without a physics background: what difficulties are intrinsic to a quantum Hamiltonian? What is the intuition behind the Suzuki-Trotter decomposition you develop? What is the 'quantum' Boltzmann machine in machine learning terms (hidden-hidden connections in an undirected graphical model!)? What is replica-stacking in graphical model terms (this would be a great ML contribution in its own right!)? Really spelling these things out in detail (or in the appendix) would help
==========================================
1) eq 14 is malformed
2) references are not well-formatted
3) need factor of 1/2 to avoid double counting in sums over nearest neighbors (please be precise) |
iclr_2018_rk8wKk-R- | This paper revisits the problem of sequence modeling using convolutional architectures. Although both convolutional and recurrent architectures have a long history in sequence prediction, the current "default" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks. The goal of this paper is to question this assumption. Specifically, we consider a simple generic temporal convolution network (TCN), which adopts features from modern ConvNet architectures such as a dilations and residual connections. We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and sometimes even highly specialized approaches. We further show that the potential "infinite memory" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts. As a whole, we argue that it may be time to (re)consider ConvNets as the default "go to" architecture for sequence modeling. | The authors claim that convolutional networks should be considered as possible replacements of recurrent neural networks as the default choice for solving sequential modelling problems. The paper describes an architecture similar to wavenet with residual connections. Empirical results are presented on a large number of tasks where the convolutional network often outperforms modern recurrent baselines or reaches similar performance.
The biggest strength of the paper is the large number of tasks on which the models are evaluated. The experiments seem sound and the information in both the paper and the appendix seems to allow for replication. That said, I don't think that all the tasks are very relevant for comparing convolutional and recurrent architectures. While the time windows that RNNs can deal with are infinite in principle, it is common knowledge that the effective length of the dependencies RNNs can model is quite limited in practice. Many of the artificial tasks like the adding problem and sequential MNIST have been designed to highlight this weakness of RNNs. I don't find it very surprising that these tasks are easy to solve with a feedforward architecture with a large enough context window. The more impressive results are in my opinion those on the language modelling tasks where one would indeed expect RNNs to be more suitable for capturing dependencies that require stack-like memory functionality.
While the related work is quite comprehensive, it downplays the popularity of convolutional architectures throughout history a bit. Especially in speech recognition, RNNs have only recently started to gain popularity while deep feedforward networks applied to overlapping time windows (i.e., 1D convolutions) have been the state-of-the-art for years. Of course the recent successes of dilated convolutions are likely to change the landscape in this application domain yet again.
The paper is well-structured and written. If anything, it is perhaps a little bit wordy at times but I prefer that over obscurity due to brevity.
The ideas in the paper are not novel and neither do the authors claim that they are. Unfortunately, I also think that the impact of the work is also somewhat limited due to the enormous success of the wavenet architecture. I do think that the results on the real-world tasks are valuable and worthy of publication. However, I feel that the authors exaggerate the extent to which researchers in this field still consider RNNs superior models for sequences.
+ Many experiments and tasks.
+ Well-written and clear.
+ Good results
- Somewhat exaggerated claims about the extent to which RNNs are still being considered more suitable sequence models
than dilated convolutions. Especially in light of the success of Wavenet.
- Not much novelty/originality. |
iclr_2018_Bk_fs6gA- | This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems. We introduce a new memory augmented neural model in which the memory is not resettable (i.e the information stored in the memory after processing an input example is kept for the next seen examples). We used deep reinforcement learning to train a memory controller agent to store useful memories. Our model was able to outperform hand-crafted solver on Binary Linear Programming (Binary LP). The proposed model is tested on different Binary LP instances with large number of variables (up to 1000 variables) and constrains (up to 700 constrains). | # Summary
This paper proposes a neural network framework for solving binary linear programs (Binary LP). The idea is to present a sequence of input-output examples to the network and train the network to remember input-output examples to solve a new example (binary LP). In order to store such information, the paper proposes an external memory with non-differentiable reading/writing operations. This network is trained through supervised learning for the output and reinforcement learning for discrete operations. The results show that the proposed network outperforms the baseline (handcrafted) solver and the seq-to-seq network baseline.
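To make the input-output setup concrete, a single training example consists of a binary LP instance and its solution. A toy instance and a brute-force reference solver are sketched below; this is purely illustrative (exponential in n) and is not the solver used in the paper.

```python
import itertools
import numpy as np

def solve_binary_lp(c, A, b):
    """Brute-force reference solver for a toy binary LP:
    maximise c.x subject to A.x <= b, with x in {0, 1}^n."""
    best_x, best_val = None, -np.inf
    for bits in itertools.product([0, 1], repeat=len(c)):
        x = np.array(bits)
        if np.all(A @ x <= b) and c @ x > best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# one (problem, solution) pair of the kind the network is trained on
c = np.array([3.0, 1.0, 2.0])
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])
b = np.array([2.0, 2.0])
print(solve_binary_lp(c, A, b))   # -> (array([1, 1, 0]), 4.0)
```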
[Pros]
- The idea of approximating a binary linear program solver using neural network is new.
[Cons]
- The paper is not clearly written (e.g., problem statement, notations, architecture description). So, it is hard to understand the core idea of this paper.
- The proposed method and problem setting are not well-justified.
- The results are not very convincing.
# Novelty and Significance
- The problem considered in this paper is new, but it is unclear why the problem should be formulated in such a way. To my understanding, the network is given a set of input (problem) and output (solution) pairs and should predict the solution given a new problem. I do not see why this should be formulated as a "sequential" decision problem. Instead, we can just give access to all input/output examples (in a non-sequential way) and allow the network to predict the solution given the new input like Q&A tasks. This does not require any "memory" because all necessary information is available to the network.
- The proposed method seems to require a set of input/output examples even during evaluation (if my understanding is correct), which has limited practical applications.
# Quality
- The proposed reward function for training the memory controller sounds a bit arbitrary. The entire problem is a supervised learning problem, and the memory controller is just a non-differentiable decision within the neural network. In this case, the reward function is usually defined as the sum of log-likelihood of the future predictions (see [Kelvin Xu et al.] for training hard-attention) because this matches the supervised learning objective. It would be good to justify (empirically) the proposed reward function.
- The results are not fully-convincing. If my understanding is correct, the LTMN is trained to predict the baseline solver's output. But, the LTMN significantly outperforms the baseline solver even in the training set. Can you explain why this is possible?
# Clarity
- The problem statement and model description are not described well.
1) Is the network given a sequence of program/solution input? If yes, is it given during evaluation as well?
2) Many notations are not formally defined. What is the output (o_t) of the network? Is it the optimal solution (x_t)?
3) There is no mathematical definition of memory addressing mechanism used in this paper.
- The overall objective function is missing.
[Reference]
- Kelvin Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention |
iclr_2018_rkmu5b0a- | MGAN: TRAINING GENERATIVE ADVERSARIAL NETS WITH MULTIPLE GENERATORS
We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators' distributions and the empirical data distribution is minimal, whilst the JSD among generators' distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators. | The present manuscript attempts to address the problem of mode collapse in GANs using a constrained mixture distribution for the generator, and an auxiliary classifier which predicts the source mixture component, plus a loss term which encourages diversity amongst components.
All told the proposed method is quite incremental, as mixture GANs/multi-generators have been done before. The Inception scores are good but it's widely known now that Inception scores are a deeply flawed measure, and presenting it as the only quantitative measure in a manuscript which makes strong claims about mode collapse unfortunately will not suffice. If the generator were to generate one template per class for which the Inception network's p(y|x) had low entropy, the Inception score would be quite high even though the model had only memorized one image per class. For claims surrounding mode collapse in particular, evaluation against a parameter count matched baseline using the AIS log likelihood estimation procedure in Wu et al (2017) would be the gold standard. Frechet Inception distance has also been proposed which at least has some favourable properties relative to Inception score.
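To make the Inception-score failure mode concrete, here is a small numerical illustration (the class posteriors below are synthetic stand-ins for the Inception network's p(y|x), not outputs of any actual model): a generator that memorizes a single template per class and emits each equally often already attains the maximal score.
```python
import numpy as np

# Synthetic illustration: 10 memorized templates, one per class, each shown
# equally often. p(y|x) is (nearly) one-hot for every template.
num_classes = 10
eps = 1e-6
p_y_given_x = np.full((num_classes, num_classes), eps)
np.fill_diagonal(p_y_given_x, 1.0 - eps * (num_classes - 1))

p_y = p_y_given_x.mean(axis=0)  # marginal class distribution (uniform here)
kl = np.sum(p_y_given_x * (np.log(p_y_given_x) - np.log(p_y)), axis=1)
print(np.exp(kl.mean()))  # ~10.0, the maximum Inception score with 10 classes
```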
The mixing proportions are fixed to the uniform distribution, and therefore this method also makes the unrealistic assumption that modes are equiprobable and require an equal amount of modeling capacity. This seems quite dubious.
Finally, their own qualitative results indicate that they've simply moved the problem, with clear evidence of mode collapse in one of their mixture components in figure 5c, 4th row from the bottom. Indeed, this does nothing to address the problem of mode collapse in general, as there is nothing preventing individual mixture component GANs from collapsing.
Uncited prior work includes Generative Adversarial Parallelization of Im et al (2016). Also, if I'm not mistaken this is quite similar to an AC-GAN, where the classes are instead randomly assigned and the generator conditioning is done in a certain way; namely the first layer activations are the sum of K embeddings which are gated by the active mixture component. More discussion of this would be warranted.
Other notes:
- The introduction contains no discussion of the ill-posedness of the GAN game as it is played in practice.
- "As a result, the optimization order in 1 can be reversed" this does not accurately characterize the source of the issues, see, e.g. Goodfellow (2015) "On distinguishability criteria...".
- Section 3: the second last sentence of the third paragraph is vague and doesn't really say anything. Of course parameter sharing leverages common information. How does this help to train the model effectively?
- Section 3: Since JSD is defined between two distributions, it is not clear what JSD_pi(P_G1, P_G2, ...) refers to. The last line of the proof of theorem 2 leaps to calling this term a Jensen-Shannon divergence but it's not clear what the steps are; it looks like a regular KL divergence to me.
- Section 3: Also, is the classifier being trained to maximize this divergence or just the generator? I assume the latter.
- The proof of Theorem 3 makes unrealistic assumptions that we know the number of components a priori as well as their mixing proportions (pi).
- "... which further minimizes the objective value" -- it minimizes a term that you introduced which is constant with respect to your learnable parameters. This is not a selling point, and I'm not sure why you bothered mentioning it.
- There's no mention of the substitution of log (1 - D(x)) for -log(D(x)) and its effect on the interpretation as a Jensen-Shannon divergence (which I'm not sure was quite right in the first place)
- Section 4: does the DAE introduced in DFM really introduce that much of a computational burden?
- "Symmetric Kullback Liebler divergence" is not a well-known measure. The standard KL is asymmetric. Please define it.
- Figure 2 is illegible in grayscale.
- Improved-GAN score in Table 1 is misleading, as this was their no-label baseline. It's fine to include it but indicate it as such.
Update: many of my concerns were adequately addressed, however I still feel that calling this an avenue to "overcome mode collapse" is misleading. This seems aimed at improving coverage of the support of the data distribution; test log likelihood bounds via AIS (there are GAN baselines for MNIST in the Wu et al manuscript I mentioned) would have been more compelling quantitative evidence. I've raised my score to a 5. |
iclr_2018_HkbmWqxCZ | Variational autoencoders (VAE), (Kingma & Welling, 2013;Rezende et al., 2014), learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model a VAE model assigns to each data instance a latent code. In many applications, this latent code provides a useful high-level summary of the observation. However, the VAE may fail to learn a useful representation when the decoder family is very expressive. This is because maximum likelihood does not explicitly encourage useful representations and the latent variable is used only if it helps model the marginal distribution. This makes representation learning with VAEs unreliable. To address this issue, we propose a method for explicitly controlling the amount of information stored in the latent code. Our method can learn codes ranging from independent to nearly deterministic, while benefiting from decoder capacity. Thus, we decouple the choice of decoder capacity and the latent code dimensionality from the amount of information stored in the code. | Summary
This paper proposes a penalized VAE training objective for the purpose of increasing the information between the data x and the latent code z. Ideally, optimization would consist of maximizing log p(x) - | I(x,z) - M |, where M is the user-specified target mutual information (MI) and I(x,z) is the model’s current MI value, but I(x,z) is intractable, necessitating the use of an auxiliary model r(z|x). Optimization, then, consists of alternating gradient ascent on the VAE parameters and r’s parameters. Experiments on simulations and text data are reported, showing that increasing M has the desired effect of allowing more deviation from the prior. Specifically, this is shown through text generation where the sampled sentences become more varied as M is decreased and better reconstructed as M is increased.
Evaluation
Pros: I like how this paper formalizes failure in representation learning as information loss in z (although the formulation is not particularly novel, see [Zhao et al., ArXiv 2017]), and constructs an explicit, penalized objective to allow the user to specify the amount of information retained in z. In my opinion, the proposed objective is more transparent than the objectives proposed by related work. For instance, Chen et al.’s (2017) Lossy VAE, while aiming to solve essentially the same problem, does so by parameterizing the prior and using a windowed decoder, but there is no explicit control mechanism as far as I’m aware (except for how many parameters / window size). Perhaps the Beta-VAE’s [Higgins et al., ICLR 2017] KLD weight is similarly interpretable (as beta increases, less information is retained), but I like that M has the clear meaning of mutual information---whereas the beta in the Beta-VAE is just a Lagrange multiplier. In terms of experiments, I like the first simulation; it’s a convincing sanity check. As for the second, I like the spirit of it, but I have some criticisms, as I’ll explain below.
Cons: The method requires training an auxiliary model r(z|x) to estimate I(x,z). While I don’t find the introduction of r(z|x) problematic, I do wish there was more discussion and analysis of how well the mutual information is being approximated during training, especially given some of the simplifying assumptions, such as r(z|x)=p(z|x). If the MI estimate is way off, that detracts from the method and makes an alternative like the Beta-VAE---which doesn’t require an auxiliary model---more appealing, since what makes the MAE superior---its principled targeting of MI---does not hold in practice.
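For reference, the reason the quality of r(z|x) matters can be seen from the standard variational (Barber-Agakov-style) treatment of the mutual information; whether the paper uses exactly this estimator or a slight variant, the same gap term governs the approximation error:
$$
I(x,z) = H(z) + \mathbb{E}_{p(x,z)}\big[\log r(z\mid x)\big] + \mathbb{E}_{p(x)}\big[\mathrm{KL}\big(p(z\mid x)\,\|\,r(z\mid x)\big)\big] \;\ge\; H(z) + \mathbb{E}_{p(x,z)}\big[\log r(z\mid x)\big],
$$
so the estimate is exact only when r matches the true posterior p(z|x). Reporting, or at least bounding, this KL gap during training would make the claimed MI target much more convincing.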
As for the movie review experiment, I find the sentence samples a bit anecdotal. Was the seed sentence (“there are many great scenes of course”) randomly chosen or hand picked? Was this interpolation behavior typical? I ask these questions because I find the plot in Figure 3 all but meaningless. It’s good that we see reconstruction quality go up as M increases, as expected, but the baseline VAE model is a strawman. How does reconstruction percentage look for the Bowman et al. (2015) VAE? What about the Beta-VAE? Or Lossy VAE? Figure 3 would be okay if there were more experiments, but as it is the only quantitative result, more work should have gone in to it. For instance, a compelling result would be if we see one or more of the models above plateau in reconstruction percentage and the MAE surpass that plateau.
Conclusions
While I found aspects of this paper interesting, I recommend rejection primarily for two reasons. The first is that I would like to see how well the mutual information is being estimated during training. If the estimate is way off, this makes the method less appealing as what I like about it---the interpretable MI target---is not really a ‘target’, in practice, and rather, is a rough hyperparameter similar to the Beta-VAE’s beta term (which has the added benefit of no auxiliary model). The second reason is the paper’s weak experimental section. The only quantitative result is Figure 3, and while it shows reconstruction percentage increases with M, there is no way to contextualize the number as the only comparison model is a weak VAE, which gives ~ 0%. Questions I would like to see answered: How good is the MI estimate? How close is the converged VAE to the target? How does the model compare to the Bowman et al. VAE or the Beta-VAE? (It would be quite compelling to show similar or improved performance without the training tricks used by Bowman et al.) Can we somehow estimate the appropriate M directly from data (such as based on the entropy of training or validation set) in order to set the target rigorously?
1. S. Zhao, J. Song, and S. Ermon. “InfoVAE: Information Maximizing Variational Autoencoders.” ArXiv 2017.
2. X. Chen, D. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Shulman, I. Sutskever, and P. Abbeel. “Variational Lossy Autoencoder.” ICLR 2017.
3. I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. “Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.” ICLR 2017
4. S. Bowman, L. Vilnis, O. Vinyas, A. Dai, R. Jozefowicz, and S. Bengio. “Generating Sentences from a Continuous Space.” CoNLL 2016. |
iclr_2018_rk3pnae0b | Asking questions is an important ability for a chatbot. This paper focuses on question generation. Although there are existing works on question generation based on a piece of descriptive text, it remains to be a very challenging problem. In the paper, we propose a new question generation problem, which also requires the input of a target topic in addition to a piece of descriptive text. The key reason for proposing the new problem is that in practical applications, we found that useful questions need to be targeted toward some relevant topics. One almost never asks a random question in a conversation. Due to the fact that given a descriptive text, it is often possible to ask many types of questions, generating a question without knowing what it is about is of limited use. To solve the problem, we propose a novel neural network that is able to generate topic-specific questions. One major advantage of this model is that it can be trained directly using a question-answering corpus without requiring any additional annotations like annotating topics in the questions or answers. Experimental results show that our model outperforms the state-of-the-art baseline. | This paper presents a neural network-based approach to generate topic-specific questions with the motivation that topical questions are more meaningful in practical applications like real-world conversations. Experiments and evaluation have been conducted on the AQAD corpus to show the effectiveness of the approach.
Although the main contributions are clear, the paper contains numerous typos, grammatical errors, incomplete sentences, and a lot of discrepancies between text, notations, and figures making it ambiguous and difficult to follow.
Authors claim to generate topic-specific questions, however, the dataset choice, experiments, and examples show that the generated questions are essentially keyword/key phrase-based. This is also apparent in Section 4.1 where authors present some observation without any supporting proof or empirical evidence. Moreover, the example in Figure 1 shows a conversation, but, typically, in an ongoing multi-round conversation people do not tend to repeat the keywords or key phrases or named entities, and topic shifts might occur at any time.
Overall, a misconception about topic vs. keywords might have led the authors to claim that their work is the first to generate topic-specific questions whereas this has been studied before by Chali & Hasan (2015) in a non-neural setting. "Topic" in general has a broader meaning, I would suggest authors to see this to get an idea about what topic entails to in a conversational setting: https://developer.amazon.com/alexaprize/contest-rules . I think the proposed work is mostly related to: 1) "Towards Natural Question-Guided Search" by Kotov and Zhai (2010), and 2) "K2Q: Generating Natural Language Questions from Keywords with User Refinements" by Zheng et al. (2011), and other recent factoid question generation papers where questions are generated from a given fact (e.g. "Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus" by Serban et al. (2016)).
It is not clear how the question types are extracted from the given sentences. Please provide details. Which keywords are employed to accomplish this? Also, please explain the absence of the "why" type question.
Figure 3 and the associated descriptions are very hard to follow. Please draw the figure by matching it with the descriptions. Where are the bi-LSTMs in the figure? What are ac_t and em_t?
My major concern is with the experiments and evaluation. The dataset essentially contains questions about product reviews and does not match authors motivation/observation about real-world conversations. Moreover, evaluation has been conducted on a very small test set (just about 1% of the selected corpus), making the results unconvincing. More details are necessary about how exactly Kim's and Liu's models are used to get question types and topics.
Human evaluation results per category would have been more useful. How did you combine the scores of the human evaluation categories? Also, automatic evaluation and human evaluation results do not correlate well. Please explain. |
iclr_2018_HJIhGXWCZ | In this work we introduce a new framework for performing temporal predictions in the presence of uncertainty. It is based on a simple idea of disentangling components of the future state which are predictable from those which are inherently unpredictable, and encoding the unpredictable components into a low-dimensional latent variable which is fed into a forward model. Our method uses a supervised training objective which is fast and easy to train. We evaluate it in the context of video prediction on multiple datasets and show that it is able to consistently generate diverse predictions without the need for alternating minimization over a latent space or adversarial training. | Summary:
I like the general idea of learning "output stochastic" noise models in the paper, but the idea is not fully explored (in terms of reasonable variations and their comparative performance). I don't fully understand the rationale for the experiments: I cannot speak to the reasons for the GAN's failure (GANs are not easy to train and this seems to be reflected in the results); the newly proposed model seems to improve with samples simply because the evaluation seems to reward the best sample. I.e., with enough throws, I can always hit the bullseye with a dart even when blindfolded.
Comments:
The paper proposes to learn a conditional stochastic deep model by training an output noise model on the input x_i and the residual y_i - g(x_i). The trained residual function can be used to predict a residual z_i for x_i. Then for out-of-sample prediction for x*, the paper appears to propose sampling a z uniformly from the training data {z_i}_i (it is not clear from the description on page 3 that this uniformly sampled z* = z_i depends on the actual x* -- as far as I can tell it does not). The paper does suggest learning a p(z|x) but does not provide implementation details nor experiment with this approach.
I like the idea of learning an "output stochastic" model -- it is much simpler to train than an "input stochastic" model that is more standard in the literature (VAE, GAN) and there are many cases where I think it could be quite reasonable. However, I don't think the authors explore the idea well enough -- they simply appear to propose a non-parametric way of learning the stochastic model (sampling from the training data z_i's) and do not compare to reasonable alternative approaches. To start, why not plot the empirical histogram of p(z|x) (for some fixed x's) to get a sense of how well-behaved it is as a distribution. Second, why not simply propose learning exponential family models where the parameters of these models are (deep nets) conditioned on the input? One could even start with a simple Gaussian and linear parameterization of the mean and variance in terms of x. If the contribution of the paper is the "output stochastic" noise model, I think it is worth experimenting with the design options one has with such a model.
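As a concrete instance of the kind of parametric alternative suggested above, here is a minimal sketch of a conditional Gaussian output-noise model, with the mean and variance of p(z|x) produced by a small network and trained by maximum likelihood on the residuals; all layer sizes and names are illustrative and not taken from the paper.
```python
import torch
import torch.nn as nn

class ConditionalGaussianNoise(nn.Module):
    """p(z|x) = N(mu(x), diag(sigma(x)^2)); a simple 'output stochastic' model."""
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_sigma = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_sigma(h)

def residual_nll(model, x, z):
    # Gaussian negative log-likelihood of residuals z = y - g(x), up to a constant.
    mu, log_sigma = model(x)
    return (log_sigma + 0.5 * ((z - mu) / log_sigma.exp()) ** 2).sum(dim=1).mean()

# At test time, sample z* ~ N(mu(x*), sigma(x*)^2) instead of drawing a training
# residual uniformly at random, so the noise actually depends on x*.
```
Plotting the empirical residual histograms against such a fitted conditional density would also directly answer the question of how well-behaved p(z|x) is.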
The experiments range over 4 video datasets. PSNR is evaluated on predicted frames -- PSNR does not appear to be explicitly defined but I am taking it to be the metric defined in the 2nd paragraph from the bottom on page 7. The new model "EEN" is compared to a deterministic model and conditional GAN. The GAN never seems to perform well -- the authors claim mode collapse, but I wonder if the GAN was simply hard to train in the first place and this is the key reason? Unsurprisingly (since the EEN noise does not seem to be conditioned on the input), the baseline deterministic model performs quite well. If I understand what is being evaluated correctly (i.e., best random guess) then I am not surprised the EEN can perform better with enough random samples. Have we learned anything? |
iclr_2018_B16yEqkCZ | Many practical reinforcement learning problems contain catastrophic states that the optimal policy visits infrequently or never. Even on toy problems, deep reinforcement learners periodically revisit these states, once they are forgotten under a new policy. In this paper, we introduce intrinsic fear, a learned reward shaping that accelerates deep reinforcement learning and guards oscillating policies against periodic catastrophes. Our approach incorporates a second model trained via supervised learning to predict the probability of imminent catastrophe. This score acts as a penalty on the Q-learning objective. Our theoretical analysis demonstrates that the perturbed objective yields the same average return under strong assumptions and an ε-close average return under weaker assumptions. Our analysis also shows robustness to classification errors. Equipped with intrinsic fear, our DQNs solve the toy environments and improve on the Atari games Seaquest, Asteroids, and Freeway. | The paper addresses the problem of learners forgetting rare states and revisiting catastrophic danger states. The authors propose to train a predictive ‘fear model’ that penalizes states that lead to catastrophes. The proposed technique is validated both empirically and theoretically.
Experiments show a clear advantage during learning when compared with a vanilla DQN. Nonetheless, there are some criticisms that can be made of both the method and the evaluations:
The fear radius threshold k_r seems to add yet another hyperparameter that needs tuning. Judging from the description of the experiments, this parameter is important to the performance of the method and needs to be set experimentally. There seems to be no way to determine a good distance a priori, as there is no way to know in advance when a catastrophe becomes unavoidable. No empirical results on the effect of the parameter are given.
The experimental results support the claim that this technique helps to avoid catastrophic states during initial learning. The paper, however, also claims to address the longer term problem of revisiting these states once the learner forgets about them, since they are no longer part of the data generated by (close to) optimal policies. This problem does not seem to be really solved by this method. Danger and safe state replay memories are kept, but are only used to train the catastrophe classifier. While the catastrophe classifier can be seen as an additional external memory, it seems that the learner will still drift away from the optimal policy and then need to be reminded by the classifier through penalties. As such, the method wouldn’t prevent catastrophic forgetting; it would just prevent the worst consequences by penalizing the agent before it reaches a danger state. It would therefore be interesting to see some long-running experiments and analyse how often catastrophic states (or those close to them) are visited.
Overall, the current evaluations focus on performance and give little insight into the behaviour of the method. The paper also does not compare to any other techniques that attempt to deal with catastrophic forgetting and/or the changing state distribution ([1,2]).
In general the explanations in the paper often use confusing and imprecise language, even in formal derivations, e.g. ‘if the fear model reaches arbitrarily high accuracy’ or ‘if the probability is negligible’.
It wasn’t clear to me that the properties described in Theorem 1 actually hold. The motivation in the appendix is very informal and no clear derivation is provided. The authors seem to indicate that a minimal return can be guaranteed because the optimal policy spends a maximum of epsilon amount of time in the catastrophic states and the alternative policy simply avoids these states. However, as the alternative policy is learnt on a different reward, it can have a very different state distribution, even for the non-catastrophic states. It might attach all its weight to a very poor reward state in an effort to avoid the catastrophe penalty. It is therefore not clear to me that any claims can be made about its performance without additional assumptions.
It seems that one could construct a counterexample using a 3-state chain problem (no_reward, danger, goal) where the only way to get to the single goal state is to incur a small risk of visiting the danger state. Any optimal policy would therefore need to spend some time e in the danger state, on average. A policy that learns to avoid the danger state would then also be unable to reach the goal state and receive rewards. E.g. pi* has stationary distribution (0, e, 1-e) and return 0*0 + e*Rmin + (1-e)*Rmax. By adding a sufficiently high penalty, policy pi~ can learn to avoid the catastrophic state with distribution (1, 0, 0) and then gets return 1*0 + 0*Rmin + 0*Rmax = 0 < eta*_M - e*(Rmax - Rmin) = e*Rmin + (1-e)*Rmax - e*(Rmax - Rmin). This seems to contradict the theorem. It wasn’t clear what assumptions the authors make to exclude situations like this.
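Plugging in concrete numbers makes the violation explicit; the values of e, Rmin and Rmax below are arbitrary and chosen only for illustration.
```python
# Numeric instantiation of the 3-state counterexample above.
e, Rmin, Rmax = 0.1, -1.0, 1.0

ret_opt = 0 * 0 + e * Rmin + (1 - e) * Rmax   # pi*, distribution (0, e, 1-e) -> 0.8
ret_avoid = 1 * 0 + 0 * Rmin + 0 * Rmax       # pi~, distribution (1, 0, 0)   -> 0.0
bound = ret_opt - e * (Rmax - Rmin)           # claimed lower bound            -> 0.6

print(ret_avoid >= bound)                     # False: the stated guarantee fails here
```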
[1] T. de Bruin, J. Kober, K. Tuyls and R. Babuška, "Improved deep reinforcement learning for robotics through distribution-based experience retention," 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, 2016, pp. 3947-3952.
[2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835. |
iclr_2018_S1JHhv6TW | Published as a conference paper at ICLR 2018 BOOSTING DILATED CONVOLUTIONAL NETWORKS WITH MIXED TENSOR DECOMPOSITIONS
The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways. We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy. This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design. | (Emergency review—I have no special knowledge of the subfield, and I was told a cursory review was OK, but the paper was fascinating and I ended up reading fairly carefully)
This paper does many things. It adds to a series of publications that analyze deep network architectures as parameterized decompositions of intractably large tensors (themselves the result of discretizing the entire input-output space of the network), this time focusing on the WaveNet architecture for autoregressive sequence modeling. It shows (first theoretically, then empirically) that the WaveNet's structural assumption of a single (perfect) binary tree is holding it back, and that WaveNet-like architectures with more complex mixed tree structures perform better.
Throughout the subject is treated with a high level of mathematical rigor, while relegating proofs and detailed walkthrough explanations to lengthy appendices which I did not have time to review.
Some things I noticed:
- The notation used is mostly consistent, except for some variation between dots (e.g., in Eq. 2) and bra-kets (in Fig. 1) for inner product. While I think I'm in the minority here, I'd personally be comfortable with going a little bit further with index notation and avoiding the cohabitation of tensor and vector notation styles by using indices even for dot products; that said, either kind of vector notation (dots or brakets) is certainly acceptable too.
- There are a couple more nomenclature things that might trip up those of us in the deep learning crowd—we're used to referring to "axes" or "dimensions" of a tensor, but the tensor-analysis world apparently says "modes" (and this is called out once in a parenthetical). As "dimension" means something different to tensor folks (what DLers usually call the "size" of an axis), perhaps standardizing on the shared term "axes" would be worthwhile? Not sure if there's a distinction in the tensor world between the words "axis" and "mode."
- The baseline WaveNet is only somewhat well described as "convolutional;" the underlying network unit is not always a "size-2 convolution" (except for certain values of g) and the "size-1 convolutions" that make it up are simply linear transformations. While the WaveNet derives from convolutional sequence architectures (and the choices of g explored in the original paper derive from the CNN literature) it has at least as much in common with recursive/tree-structured network architectures like TreeLSTMs and RNTNs. In fact, the WaveNet is a special case of a recursive neural network with a particular composition function *and a fixed (perfect) binary tree structure.* As this last condition is relaxed in the present paper, making the space of networks under analysis more similar to the traditional space of recursive NNs, it might be worth mentioning this "alternative history" of the WaveNet.
- The choice of mixture nodes in Fig. 3 is a little unfortunate, because it includes all possible mixture nodes and doesn't make it as clear as the text does that a subset of these nodes can be chosen in the general case.
- While I couldn't follow some of Section 5, I'm a little confused that Theorem 1 appears at first glance to apply only to a non-generalized decomposition (a specific choice of g).
- Caffe would not have been my first choice for such a complex, hierarchically structure architecture; I imagine it forced the authors to write a significant amount of custom code. |
iclr_2018_ryb83alCZ | Deep generative models have advanced the state-of-the-art in semi-supervised classification, however their capacity for deriving useful discriminative features in a completely unsupervised fashion for classification in difficult real-world data sets, where adequate manifold separation is required has not been adequately explored. Most methods rely on defining a pipeline of deriving features via generative modeling and then applying clustering algorithms, separating the modeling and discriminative processes. We propose a deep hierarchical generative model which uses a mixture of discrete and continuous distributions to learn to effectively separate the different data manifolds and is trainable end-to-end. We show that by specifying the form of the discrete variable distribution we are imposing a specific structure on the model's latent representations. We test our model's discriminative performance on the task of chronic lymphocytic leukemia (CLL) diagnosis against baselines from the field of computational flow cytometry (FC), as well as the Variational Autoencoder literature. | Summary
The authors propose a hierarchical generative model with both continuous and discrete latent variables. The authors empirically demonstrate that the latent space of their model separates well healthy vs pathological cells in a dataset for Chronic lymphocytic leukemia (CLL) diagnostics.
Main
Overall the paper is reasonably well written. There are a few clarity issues detailed below.
The results seem very promising as the model clearly separates the two types of cells. But more baseline experiments are needed to assess the robustness of the results.
Novelty
The model introduced is a variant of a deep latent Gaussian model, where the top-most layer is a discrete random variable. Furthermore, the authors employ the Gumbel-trick to avoid having to explicitly marginalize the discrete latent variables.
Given the extensive literature on combining discrete and continuous latent variables in VAEs, the novelty factor of the proposed model is quite weak.
The authors use the Gumbel-trick in order to avoid explicit marginalization over the discrete variables. However, the number of categories in their problem is small (n=2), so the computational overhead of an explicit marginalization would be negligible. The result would be equivalent to replacing the top of the model p(y) p(z_L|y) by a GMM p_{GMM}(z_L) with two Gaussian components only.
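For concreteness, a minimal sketch of the explicit-marginalization alternative for K = 2 components is given below; log_q_y stands for the inference network's log q(y|x) and elbo_given_y(y) for the continuous-latent ELBO with the discrete variable clamped to y, both of which are placeholders rather than the paper's actual functions.
```python
import torch

def marginalized_elbo(log_q_y, elbo_given_y):
    # log_q_y: (batch, 2) log-probabilities from q(y|x); no Gumbel samples needed.
    q_y = log_q_y.exp()
    expected = sum(q_y[:, y] * elbo_given_y(y) for y in range(2))  # E_{q(y|x)}[ELBO_y]
    entropy = -(q_y * log_q_y).sum(dim=1)                          # H[q(y|x)]
    return expected + entropy                                      # exact bound, no sampling noise
```
With only two categories this costs two decoder passes instead of one, so comparing it against the Gumbel-softmax variant would be a cheap and informative ablation.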
Given these observations, it seems that this is an unnecessary complication added to the model as an effort to increase novelty.
It would be very informative to compare both approaches.
I would perhaps recommend this paper for an applied workshop, but not for publication in a main conference.
Details:
1) Variable h was not defined before it appeared in Eq. (5). From the text/equations we can deduce h = (y, z_1, …, z_L), but this should be more clearly stated.
2) It is counter-intuitive to define the inference model before having defined the generative model structure; perhaps the authors should consider changing the presentation order.
3) Was the VAE in VAE+SVM also trained with lambda-annealing?
4) How does a simple MLP classifier compare to the models in Tables 1 and 2?
5) It seems that what is called beta-VAE here is the same model as HCDVAE but trained with a lambda that anneals to a value different from one (the value of beta). In this case, what is the value at which the annealing terminates? How was that value chosen?
6) The authors used 3 stochastic layers, how was that decided? Is there a substantial difference in performance compared to 1 and 2 stochastic layers?
7) How do the different models behave in terms of train vs. test set likelihoods? Was overfitting detected for some settings? How does the choice of the MCC threshold affect train/test likelihoods?
8) Have the authors compared explicitly marginalizing y with using the Gumbel-trick?
Other related work:
A few other papers that have explored discrete latent variables as a way to build more structured VAEs are worth mentioning/referring to:
[1] Dilokthanakul N, Mediano PA, Garnelo M, Lee MC, Salimbeni H, Arulkumaran K, Shanahan M. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648. 2016 Nov 8.
[2] Goyal P, Hu Z, Liang X, Wang C, Xing E. Nonparametric Variational Auto-encoders for Hierarchical Representation Learning. arXiv preprint arXiv:1703.07027. 2017 Mar 21. |
iclr_2018_BJ78bJZCZ | Recurrent Neural Networks architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its computations. Unfortunately, the RWA cannot change the attention it has assigned to previous timesteps, and so struggles with carrying out consecutive tasks or tasks with changing requirements. We present the Recurrent Discounted Attention (RDA) unit that builds on the RWA by additionally allowing the discounting of the past. We empirically compare our model to RWA, LSTM and GRU units on several challenging tasks. On tasks with a single output the RWA, RDA and GRU units learn much quicker than the LSTM and with better performance. On the multiple sequence copy task our RDA unit learns the task three times as quickly as the LSTM or GRU units while the RWA fails to learn at all. On the Wikipedia character prediction task the LSTM performs best but it followed closely by our RDA unit. Overall our RDA unit performs well and is sample efficient on a large variety of sequence tasks. | The authors present RDA, the Recurrent Discounted Attention unit, that improves upon RWA, the earlier introduced Recurrent Weighted Average unit, by adding a discount factor. While the RWA was an interesting idea with bad results (far worse than the standard GRU or LSTM with standard attention except for hand-picked tasks), the RDA brings it more on-par with the standard methods.
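For readers unfamiliar with these units, the core recurrence is, schematically (my own simplified rendering based on the abstract and the RWA paper; the exact gating and parameterization in this submission may differ): the RWA keeps an attention-weighted running average through a numerator/denominator pair, and the RDA's change is the discount gate g_t,
$$
n_t = g_t \odot n_{t-1} + z_t \odot e^{a_t}, \qquad d_t = g_t \odot d_{t-1} + e^{a_t}, \qquad h_t = f\!\left(n_t / d_t\right),
$$
with the original RWA recovered at g_t = 1, i.e. the unit can never down-weight what it has already attended to, which is exactly the limitation the discount is meant to remove.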
On the positive side, the paper is clearly written and adding discount to RWA, while a small change, is original. On the negative side, in almost all tasks the RDA is on par or worse than the standard GRU - except for MultiCopy where it trains faster, but not to better results and it looks like the difference is between few and very-few training steps anyway. The most interesting result is language modeling on Hutter Prize Wikipedia, where RDA very significantly improves upon RWA - but again, only matches a standard GRU or LSTM. So the results are not strongly convincing, and the paper lacks any mention of newer work on attention. This year strong improvements over state-of-the-art have been achieved using attention for translation ("Attention is All You Need") and image classification (e.g., Non-local Neural Networks, but also others in ImageNet competition). To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely-studied tasks. |
iclr_2018_Sk1NTfZAb | Large-scale publicly available datasets play a fundamental role in training deep learning models. However, large-scale datasets are difficult to collect in problems that involve processing of sensitive information. Collaborative learning techniques provide a privacy-preserving solution in such cases, by enabling training over a number of private datasets that are not shared by their owners. Existing collaborative learning techniques, combined with differential privacy, are shown to be resilient against a passive adversary which tries to infer the training data only from the model parameters. However, recently, it has been shown that the existing collaborative learning techniques are vulnerable to an active adversary that runs a GAN attack during the learning phase. In this work, we propose a novel keybased collaborative learning technique that is resilient against such GAN attacks. For this purpose, we present a collaborative learning formulation in which class scores are protected by class-specific keys, and therefore, prevents a GAN attack. We also show that very high dimensional class-specific keys can be utilized to improve robustness against attacks, without increasing the model complexity. Our experimental results on two popular datasets, MNIST and AT&T Olivetti Faces, demonstrate the effectiveness of the proposed technique against the GAN attack. To the best of our knowledge, the proposed approach is the first collaborative learning formulation that effectively tackles an active adversary, and, unlike model corruption or differential privacy formulations, our approach does not inherently feature a trade-off between model accuracy and data privacy. | In this paper, the authors proposed a counter measure to protect collaborative training of DNN against the GAN attack in (Hitaj et al. 2017). The motivation of the paper is clear and so is the literature review. But for me the algorithm is not clearly defined and it is difficult to evaluate how the proposed procedure works. I am not saying that this is not the solution. I am just saying that the paper is not clear enough to say that it is (or it is not). From, my perspective this will make the paper a clear reject.
I think the authors should explain a few things more clearly in order to make the paper foolproof. The first one seems to me the most clear problem with the approach proposed in the paper:
1 $\psi(c)$ defines the mapping from each class to a high dimensional vector that allows protection against the GAN attack. $\psi(c)$ is suppose to be private for each class (or user if each class belong only to one user). This is the key aspect in the paper. But if more than one user have the same class they will need to share this key. Furthermore, at test time, these keys need to be known by everyone, because the output of the neural network needs to be correlated against all keys to see which is the true label. Of course the keys can only be released after the training is completed. But the adversary can also claim to have examples from the class it is trying to attack and hence the legitimate user that generated the key will have to give the attacker the key from the training phase. For example, let assume the legitimate user only has ones from MNIST and declares that it only has one class. The attacker says it has two classes the same one that the legitimate user and some other label. In this case the legitimate user needs to share $\psi(c)$ with the attacker. Of course this sounds “fishy” and might be a way of finding who the attacker is, but there might be many cases in which it makes sense that two or more users shares the same labels and in a big system might be complicated to decide who has access to which key.
2 I do not understand the definition of $\phi(x)$. Is this embedding fixed for each user? Is this embedding the DNN? In Eq. 4 I would assume that $\phi(x)$ is the DNN and that it should be $\phi_\theta(x)$, because otherwise the equation does not make sense. But this is not clearly explained in the paper and Eq 4 makes no sense at all. In a way the solution to the maximization in Eq 4 is Theta=\infty. Also the term $\phi(x)$ is not mentioned in the paper after page 5. My take is that the authors want to maximize the inner product, but then the regularizer should go the other way around.
3 In the paper in page 5 we can read: “Here, we emphasize the first reason why it is important to use l2-normalized class keys and embedding outputs: in this manner, the resulting classification score is by definition restricted to the range [-1; +1],” If I understand correctly the authors are dividing the inner product by ||$\psi(c)|| ||$\phi(x)||. I can see that we can easily divide by ||$\psi(c)||, but I cannot see how we can do dive by ||$\phi(x)||, if this term depends on \theta. If this term does not depend on \theta, then Eq 4 does not make sense.
To summarize, I have the impression that there are many elements in the paper that does not makes sense in the way that they are explained and that the authors need to tell the paper in a way that can be easily understood and replicated. I recommend the authors to run the paper by someone in their circle that could help them rewrite the paper in a way that is more accessible. |
iclr_2018_rkxY-sl0W | Workshop track -ICLR 2018 TREE-TO-TREE NEURAL NETWORKS FOR PROGRAM TRANSLATION
Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to consider employing deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network as an encoderdecoder architecture to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects. | This paper aims to translate source code from one programming language to another using
a neural network architecture that maps trees to trees. The encoder uses an upward pass of
a Tree LSTM to compute embeddings for each subtree of the input, and then the decoder
constructs a tree top-down. As nodes are created in the decoder, a hidden state is passed
from parents to children via an LSTM (one for left children, one for right children), and
an attention mechanism allows nodes in the decoder to attend to subtrees in the encoder.
Experimentally, the model is applied to two synthetic datasets, where programs in the
source domain are sampled from a PCFG and then translated to the target domain with a
hand-coded translator. The model is then trained on these pairs. Results show that the
proposed approach outperforms sequence representations or serialized tree representations
of inputs and outputs.
Pros:
- Nice model which seems to perform well.
- Reasonably clear explanation.
A couple questions about the model:
- the encoder uses only bottom-up information to determine embeddings of subtrees. I wonder
if top-down information would create embeddings with more useful information for the attention
in the decoder to pick up on.
- I would be interested to know more details about how the hand-coded translator works. Does
it work in a context-free, bottom-up fashion? That is, recursively translate two children nodes
and then compute the translation of the parent as a function of the parent node and
translations of the two children? If so, I wonder what is missing from the proposed model
that makes it unable to perfectly solve the first task?
Cons:
- Only evaluated on synthetic programs, and PCFGs are known to generate unrealistic programs,
so we can only draw limited conclusions from the results.
- The paper overstates its novelty and doesn't properly deal with related work (see below)
The paper overstates its novelty and has done a poor job researching related work.
Statements like "We are the first to consider employing neural network approaches
towards tackling the problem [of translating between programming languages]" are
obviously not true (surely many people have *considered* it), and they're particularly
grating when the treatment of related work is poor, as it is in this paper. For example,
there are several papers that frame the code migration problem as one of statistical
machine translation (see Sec 4.4 of [1] for a review and citations), but this paper
makes no reference to them. Further, [2] uses distributed representations for the purpose
of code migration, which I would call a "neural network approach," so there's not any
sense that I can see in which this statement is true. The paper further says, "To the best
of our knowledge, this is the first tree-to-tree neural network architecture in the
literature." This is worded better, but it's definitely not the first tree-to-tree
neural network. See, e.g., [3, 4, 5], one of which is cited, so I'm confused about
this claim.
In total, the model seems clean and somewhat novel, but it has only been tested on
unrealistic synthetic data, the framing with respect to related work is poor, and the
contributions are overstated.
[1] https://arxiv.org/abs/1709.06182
[2] Trong Duc Nguyen, Anh Tuan Nguyen, and Tien N Nguyen. 2016b. Mapping API elements for code migration with
vector representations. In Proceedings of the International Conference on Software Engineering (ICSE).
[3] Socher, Richard, et al. "Semi-supervised recursive autoencoders for predicting sentiment distributions." Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, 2011.
[4] https://arxiv.org/abs/1703.01925
[5] Parisotto, Emilio, et al. "Neuro-symbolic program synthesis." arXiv preprint arXiv:1611.01855 (2016). |
iclr_2018_rk3b2qxCW | In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem. | In this paper, the authors suggest introducing dependencies between actions in RL settings with multi-dimensional action spaces by way of two mechanisms (using an RNN and making partial action specification as part of the state); they then introduce entropy pseudo-rewards whose maximization corresponds to joint entropy maximization.
In general, the multidimensional action methods seem either incremental or not novel to me. The combined use of the chain rule and RNNs (LSTM or not) to induce correlations in multi-dimensional outputs is well known (sequence-to-sequence networks, pixelRNN, etc.) and the extension to RL presents no difficulties, if it is not already known. Note very related work in https://arxiv.org/pdf/1607.07086.pdf and https://www.media.mit.edu/projects/improving-rnn-sequence-generation-with-rl/overview/ .
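For concreteness, the chain-rule factorization referred to here is simply the standard autoregressive decomposition of the joint policy over the D action dimensions, which also gives the policy-gradient log-probability as a sum of per-dimension terms:
$$
\pi_\theta(a \mid s) = \prod_{d=1}^{D} \pi_\theta\!\left(a_d \mid s, a_{1:d-1}\right), \qquad \nabla_\theta \log \pi_\theta(a \mid s) = \sum_{d=1}^{D} \nabla_\theta \log \pi_\theta\!\left(a_d \mid s, a_{1:d-1}\right),
$$
where the conditioning on a_{1:d-1} is implemented by the RNN state in the first approach and by augmenting the state with the partially specified action in the MMDP approach.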
As for the MMDP technique, I believe it is folklore (it can for instance be found as a problem in a problem set - http://stellar.mit.edu/S/course/2/sp04/2.997/courseMaterial/topics/topic2/readings/problemset4/problemset4.pdf). Note that both approaches could be combined; the first idea is essentially a policy method, the second, a value method. The second method could be used to provide stronger, partial action-conditional baselines (or even critics) to the first method.
The entropy derivations are more interesting - and the smoothed entropy technique is, as far as I know, novel. The experiments are well done, though on simple toy environments.
Minor:
- In section 3.2, one should in principle tweak the discount factor of the modified MDP to recover behavior identical to the original one with large action space. This should be noted (alternatively, the discount between non-environment transitions should be set to 1).
- From the description at the end of 3.2, and figure 1.b, it seems actions fed to the MMDP feed-forward network are not one-hot; I thought this was pretty surprising as it would almost certainly affect performance? Note also that the collection of feed-forward networks which collectively output the joint vector can be thought of as an RNN with a non-learned state transition.
- Since the function optimized can be written as an expectation of reward+pseudo-reward, the proof of theorem 4 can be simplified by using generic score-function optimization arguments (see Stochastic Computation Graphs, Schulman et al). |
iclr_2018_S1D8MPxA- | VITERBI-BASED PRUNING FOR SPARSE MATRIX WITH FIXED AND HIGH INDEX COMPRESSION RATIO
Weight pruning has proven to be an effective method of reducing the model size and computation cost without sacrificing its model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and a sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation is proposed utilizing the Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and the candidate that aims to minimize the model accuracy degradation is then selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy and a high-performance index decoding process. Compared with the existing magnitude-based pruning methods, the index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving a similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet. | The paper proposes VCM, a novel way to store sparse matrices that is based on the Viterbi Decompressor. Only a subset of sparse matrices can be represented in the VCM format, however, unlike CSR format, it allows for faster parallel decoding and requires much less index space. The authors also propose a novel method of pruning of neural network that constructs an (sub)optimal (w.r.t. a weight magnitude based loss) Viterbi-compressed matrix given the weights of a pretrained DNN.
VCM is an interesting analog to the conventional CSR format that may be more computationally efficient given particular software and/or hardware implementations of the Viterbi Decompressor. However, an empirical study of the possible acceleration remains an open question.
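For readers less familiar with sparse formats, a quick reminder of what the "index storage" being compressed looks like in CSR (a generic scipy sketch for illustration, not the paper's code):

```python
import numpy as np
from scipy.sparse import csr_matrix

W = np.array([[0.0, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.9],
              [0.5, 0.0, 0.0, 0.0]])
S = csr_matrix(W)
# CSR keeps three arrays: the nonzero values, one column index per nonzero,
# and a row pointer of length (num_rows + 1). The index arrays grow with the
# number of nonzeros and are decoded row by row, which is the overhead that a
# fixed-ratio, parallel-decodable index (as in VCM) is meant to reduce.
print(S.data)     # [0.3 0.9 0.5]
print(S.indices)  # [1 3 0]
print(S.indptr)   # [0 1 2 3]
```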
However, I have a major concern regarding the efficiency of the pruning procedure. The authors report practically the same level of sparsity as the pruning procedure from the Deep Compression paper. Both the proposed Viterbi-based pruning and the Deep Compression pruning belong to the previous era of pruning methods. They separate the pruning procedure from the training procedure, so the model is not trained end-to-end. However, during the last two years a lot of new adaptive pruning methods have been developed, e.g. Dynamic Network Surgery, Soft Weight Sharing, and Sparse Variational Dropout. All of them in some sense incorporate the pruning procedure into the training procedure and achieve a much higher level of sparsity (e.g. DC achieves ~13x compression of LeNet5, and SVDO achieves ~280x compression of the same network). Therefore the reported 35-50% compression of the index storage is not very significant.
It is not clear whether it is possible to take a very sparse matrix and transform it into the VCM format without a high accuracy degradation. It is also not clear whether the VCM format would be efficient for storage of extremely sparse matrices, as they would likely be more sensitive to the mismatch of the original sparsity mask, and the best possible VCM sparsity mask. Therefore I’m concerned whether it would be possible to achieve a close-to-SotA level of compression using this method, and it is not yet clear whether this method can be used for practical acceleration or not.
The paper presents an interesting idea that potentially has useful applications, however the experiments are not convincing enough. |
iclr_2018_B1p461b0W | Deep neural networks trained on large supervised datasets have led to impressive results in recent years. However, since well-annotated datasets can be prohibitively expensive and time-consuming to collect, recent work has explored the use of larger but noisy datasets that can be more easily obtained. In this paper, we investigate the behavior of deep neural networks on training sets with massively noisy labels. We show on multiple datasets such as MNIST, CIFAR-10 and ImageNet that successful learning is possible even with an essentially arbitrary amount of noise. For example, on MNIST we find that accuracy of above 90 percent is still attainable even when the dataset has been diluted with 100 noisy examples for each clean example. Such behavior holds across multiple patterns of label noise, even when noisy labels are biased towards confusing classes. Further, we show how the required dataset size for successful training increases with higher label noise. Finally, we present simple actionable techniques for improving learning in the regime of high label noise. | The paper makes a bold claim: that deep neural networks are robust to arbitrary levels of noise. It also implies that this would be true for any type of noise, and supports this latter claim using experiments on CIFAR and MNIST with three noise types: (1) uniform label noise, (2) non-uniform but image-independent label noise, which is named "structured noise", and (3) samples from out-of-dataset classes. The experiments show robustness to these types of noise.
Review:
The claim made by the paper is overly general, and in my own experience incorrect when considering real-world noise. This is supported by the literature on "data cleaning" (partially by the authors), a procedure which is widely acknowledged as critical for good object recognition. While it is true that some image-independent label noise can be alleviated in some datasets, incorrect labels in real-world datasets can substantially harm classification accuracy.
It would be interesting to understand the source of the difference between the results in this paper and the more common results (where label noise damages recognition quality). The paper did not get a chance to test these differences, and I can only raise a few hypotheses. First, real-world noise depends on the image and classes in a more structured way. For instance, raters may confuse one bird species with a similar one when the bird is photographed from a particular angle. This could be tested experimentally, for example by adding incorrect labels for close species using the CUB data for fine-grained bird species recognition. Another possible reason is that classes in MNIST and CIFAR-10 are already very distinctive, so they are more robust to noise. Once again, it would be interesting for the paper to study why it achieves robustness to noise while the effect does not hold in general.
Without such an analysis, I feel the paper should not be accepted to ICLR because the way it states its claim may mislead readers.
Other specific comments:
-- Section 3.4, the experimental setup, should clearly state details of the optimization, architecture, and hyperparameter search. For example, for Conv4, how many channels are there at each layer? How was the net initialized? Which hyperparameters were tuned and with which values? Were hyperparameters tuned on a separate validation set? How was the train/val/test split done, etc.? These details are useful for judging technical correctness.
-- Section 4, importance of large datasets. The recent paper by Chen et al (2017) would be relevant here.
-- Figure 8 failed to show for me.
-- Figure 9,10, need to specify which noise model was used. |
iclr_2018_BJjBnN9a- | This paper introduces the concept of continuous convolution to neural networks and deep learning applications in general. Rather than directly using discretized information, input data is first projected into a high-dimensional Reproducing Kernel Hilbert Space (RKHS), where it can be modeled as a continuous function using a series of kernel bases. We then proceed to derive a closed-form solution to the continuous convolution operation between two arbitrary functions operating in different RKHS. Within this framework, convolutional filters also take the form of continuous functions, and the training procedure involves learning the RKHS to which each of these filters is projected, alongside their weight parameters. This results in much more expressive filters, that do not require spatial discretization and benefit from properties such as adaptive support and non-stationarity. Experiments on image classification are performed, using classical datasets, with results indicating that the proposed continuous convolutional neural network is able to achieve competitive accuracy rates with far fewer parameters and a faster convergence rate. | The paper introduces the notion of continuous convolutional neural networks.
The main idea of the paper is to project examples into an RK Hilbert space
and to perform convolution and filtering in that space. Interestingly, the
filters defined in the Hilbert space have parameters that are learnable.
While the idea may be novel and interesting, its motivation is not clear to me. Is it for space? For speed? For expressivity of the hypothesis space?
Most data available for learning come in discrete form and, hopefully, have been digitized in accordance with Shannon's sampling theory. This means that they carry all the information necessary to rebuild their continuous counterpart. Hence, it is not clear why projecting them back into continuous functions is of interest.
Another point that is not clear, or at least misleading, is the so-called Hilbert Maps. As far as I understand, Equation (4) is not an embedding into a Hilbert space but rather a proximity space representation [1]. Hence, the learning framework of the authors can be cast more as learning with a similarity function than as learning in an RKHS [2]. A proper embedding would have mapped $x$ to a function belonging to $\mathcal{H}$. In addition, it seems that all computations are done in an $\ell^2$ space instead of in the RKHS (equations 5 and 11). Learning good similarity functions is also not novel [3], and Equations (6) and (7) correspond to learning these similarity functions. As far as I remember, there also exist papers from the nineties that learn the parameters of RBF networks, but unfortunately I have not been able to find them.
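To make the distinction concrete, by a proximity (similarity-based) representation I mean something like the following sketch (my own illustration with an RBF similarity, not the authors' Equation (4)):

```python
import numpy as np

def proximity_features(x, centers, gamma=1.0):
    # phi(x) = [k(x, c_1), ..., k(x, c_m)]: an explicit m-dimensional vector of
    # similarities to a set of centers. This lives in R^m (a proximity space),
    # not in the RKHS induced by k itself.
    sq_dists = ((centers - x[None, :]) ** 2).sum(axis=1)
    return np.exp(-gamma * sq_dists)

# Example: represent a 2-D point by its similarity to three centers.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(proximity_features(np.array([0.5, 0.5]), centers))
```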
Part 3 is the most interesting part of the paper; however, it would have been great if the authors had provided other kernel functions with closed-form convolution formulas that may be relevant for learning.
The proposed methodology is evaluated on some standard benchmarks in vision. While results are pretty good, it is not clear how the various cluster sets have been obtained and what their influence on performance is (if they are randomly initialized, it would be great to see the standard deviation of performance with respect to initializations). It would also be great to have intuitions on why a single continuous filter works better than 20 discrete ones (if this behaviour is consistent across initializations).
Overall, while the idea may be of interest, the paper lacks motivation, connections to relevant previous work, and insights into why it works. However, the performance results seem competitive, and that is exactly why the reader may be eager for insights.
minor comments
---------------
* The paper employs vocabulary that is not common in ML, e.g., I am not sure what occupancy values or inducing points are.
* Supposing that the authors properly consider computation in the RKHS, \Sigma_i should be positive definite, right? How is the update in (7) guaranteed to be positive definite? This constraint may not be necessary if they instead used a proximity space representation.
[1] https://alex.smola.org/papers/1999/GraHerSchSmo99.pdf
[2] https://www.cs.cmu.edu/~avrim/Papers/similarity-bbs.pdf
[3] A. Bellet, A. Habrard and M. Sebban. Similarity Learning for Provably Accurate Sparse Linear Classification. |
iclr_2018_SyZI0GWCZ | DECISION-BASED ADVERSARIAL ATTACKS: RELIABLE ATTACKS AGAINST BLACK-BOX MACHINE LEARNING MODELS
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios. In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks which solely rely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient-or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet. We apply the attack on two black-box algorithms from Clarifai.com. The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available as part of Foolbox (https://github.com/bethgelab/foolbox). | This is a nice paper proposing a simple but effective heuristic for generating adversarial examples from class labels with no gradient information or class probabilities. Highly relevant prior work was overlooked and there is no theoretical analysis, but I think this paper still makes a valuable contribution worth sharing with a broader audience.
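For readers who have not seen the method before, my rough mental model of the attack is the sketch below. It is heavily simplified (it drops the orthogonal projection onto the sphere around the original image and the adaptive step-size scheme described in the paper), and `is_adversarial` is a hypothetical decision-only oracle standing in for the black-box model:

```python
import numpy as np

def boundary_attack_sketch(x_orig, is_adversarial, steps=1000,
                           noise_scale=0.01, shrink=0.01):
    # Start from a point that is already misclassified (here: uniform noise),
    # then repeatedly propose small perturbations and contractions toward the
    # original image, keeping a proposal only if the top-1 decision stays wrong.
    x_adv = np.random.uniform(0.0, 1.0, size=x_orig.shape)
    assert is_adversarial(x_adv)
    for _ in range(steps):
        dist = np.linalg.norm(x_orig - x_adv)
        candidate = x_adv + noise_scale * dist * np.random.normal(size=x_orig.shape)
        candidate = candidate + shrink * (x_orig - candidate)
        candidate = np.clip(candidate, 0.0, 1.0)
        if is_adversarial(candidate):
            x_adv = candidate
    return x_adv
```

The point relevant to this review is that only the final class decision is ever queried.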
What this paper does well:
- Suggests a type of attack that hasn't been applied to image classifiers
- Proposes a simple heuristic method for performing this attack
- Evaluates the attack on both benchmark neural networks and a commercial system
Problems and limitations:
1. No theoretical analysis. Under what conditions does the boundary attack succeed or fail? What geometry of the classification boundaries is necessary? How likely are those conditions to hold? Can we measure how well they hold on particular networks?
Since there is no theoretical analysis, the evidence for effectiveness is entirely empirical. That weakens the paper and suggests an important area of future work, but I think the empirical evidence is sufficient to show that there's something interesting going on. Not a fatal flaw.
2. Poor framing. The paper frames the problem in terms of "machine learning models" in general (beginning with the first line of the abstract), but it only investigates image classification. There's no particular reason to believe that all machine learning algorithms will behave like convolutional neural network image classifiers. Thus, there's an implicit claim of generality that is not supported.
This is a presentation issue that is easily fixed. I suggest changing the title to reflect this, or at least revising the abstract and introduction to make the scope clearer.
A minor presentation quibble/suggestion: "adversarial" is used in this paper to refer to any class that differs from the true class of the instance to be disguised. But an image of a dalmatian that's labeled as a dalmatian isn't adversarial -- it's just a different image that's labeled correctly. The adversarial process is about constructing something that will be mislabeled, exploiting some kind of weakness that doesn't show up on a natural distribution of inputs. I suggest rewording some of the mentions of adversarial.
3. Ignorance of prior work. Finding deceptive inputs using only the classifier output has been done by Lowd and Meek (KDD 2005) for linear classifiers and Nelson et al. (AISTATS 2010, JMLR 2012) for convex-inducing classifiers. Both works include theoretical bounds on the number of queries required for near-optimal adversarial examples. Biggio et al. (ECML 2013) further propose training a surrogate classifier on similar training data, using the predictions of the target classifier to relabel the training data. In this way, decision information from the target model is used to help train a more similar surrogate, and then attacks can be transferred from the surrogate to the target.
Thus, "decision-based attacks" are not new, although the algorithm and experiments in this paper are.
Overall, I think this paper makes a worthwhile contribution, but needs to revise the claims to match what's done in the paper and what's been done before. |
iclr_2018_SkERSm-0- | Variational Autoencoder plays an important role in disentangled representation learning. However, it is found facing posterior collapse problem and learning multiple variants in one factor. What would be learned by variational autoencoder (VAE) and what influence the disentanglement of VAE? This paper tries to preliminarily address VAE's intrinsic dimension, real factor, disentanglement and indicator issues theoretically in the idealistic situation and implementation issue practically through noise modeling perspective in the realistic case. On intrinsic dimension issue, due to information conservation, the idealistic VAE learns and only learns intrinsic factor dimension. Besides, suggested by mutual information separation property, the constraint induced by Gaussian prior to the VAE objective encourages the information sparsity in dimension. On disentanglement issue, subsequently, inspired by information conservation theorem the clarification on disentanglement in this paper is made. On real factor issue, due to factor equivalence, the idealistic VAE possibly learns any factor set in the equivalence class. On indicator issue, the behavior of current disentanglement metric is discussed, and several performance indicators regarding the disentanglement and generating influence are subsequently raised to evaluate the performance of VAE model and to supervise the used factors. On implementation issue, the experiments under noise modeling and constraints empirically testify the theoretical analysis and also show their own characteristic in pursuing disentanglement. | This paper proposes to modify how noise factors are treated when developing VAE models. For example, the original VAE work from (Kingma and Welling, 2013) applies a deep network to learn a diagonal approximation to the covariance on the decoder side. Subsequent follow-up papers have often simplified this covariance to sigma^2*I, where sigma^2 is assumed to be known or manually tuned. In contrast, this submission suggests either treating sigma^2 as a trainable parameter, or else introducing a more flexible zero-mean mixture-of-Gaussians (MoG) model for the decoder noise. These modeling adaptations are then analyzed using various performance indicators and empirical studies.
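To be explicit about what "treating sigma^2 as a trainable parameter" amounts to on the decoder side, the reconstruction term simply becomes a Gaussian negative log-likelihood with a free variance (a generic sketch of the objective, not the authors' code; in practice log_sigma2 would be a learned parameter, or the whole term would be replaced by the proposed zero-mean MoG noise model):

```python
import numpy as np

def gaussian_decoder_nll(x, x_hat, log_sigma2):
    # -log N(x; x_hat, sigma^2 I), summed over dimensions.
    sigma2 = np.exp(log_sigma2)
    return 0.5 * np.sum((x - x_hat) ** 2 / sigma2 + log_sigma2 + np.log(2.0 * np.pi))
```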
The primary issues I have with this work are threefold: (i) The paper is not suitably organized/condensed for an ICLR submission, (ii) the presentation quality is quite low, to the extent that clarity and proper understanding are jeopardized, and (iii) the novelty is limited. Consequently my overall impression is that this work is not yet ready for acceptance to ICLR.
First, regarding the organization, this submission is 19 pages long (*excluding* references and appendices), despite the clear suggestion in the call for papers to limit the length to 8 pages: "There is no strict limit on paper length. However, we strongly recommend keeping the paper at 8 pages, plus 1 page for the references and as many pages as needed in an appendix section (all in a single pdf). The appropriateness of using additional pages over the recommended length will be judged by reviewers." In the present submission, the first 8+ pages contain minimal new material, just various background topics and modified VAE update rules to account for learning noise parameters via basic EM algorithm techniques. There is almost no novelty here. In my mind, this type of well-known content is in no way appropriate justification for such a long paper submission, and it is unreasonable to expect reviewers to wade through it all during a short review cycle.
Secondly, the presentation quality is simply too low for acceptance at a top-tier international conference (e.g., it is full of strange sentences like "Such amelioration facilitates the VAE capable of always reducing the artificial intervention due to more proper guiding of noise learning."). While I am sympathetic to the difficulties of technical writing, and realize that at times sufficiently good ideas can transcend local grammatical hiccups, my feeling is that, at least for now, another serious editing pass is needed. This is especially true given that it can be challenging to digest so many pages of text if the presentation is not relatively smooth.
Third and finally, I do not feel that there is sufficient novelty to overcome the issues already raised above. Simply adapting the VAE decoder noise factors via either a trainable noise parameter or an MoG model represents an incremental contribution as similar techniques are exceedingly common. Of course, the paper also invents some new evaluation metrics and then applies them on benchmark datasets, but this content only appears much later in the paper (well after the soft 8 page limit) and I admittedly did not read it all carefully. But on a superficial level, I do not believe these contributions are sufficient to salvage the paper (although I remain open to hearing arguments to the contrary). |
iclr_2018_rkhlb8lCZ | WAVELET POOLING FOR CONVOLUTIONAL NEURAL NETWORKS
Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification. The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress. Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options. We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling. This method decomposes features into a second level decomposition, and discards the first-level subbands to reduce feature dimensions. This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions. Experimental results on four benchmark classification datasets demonstrate our proposed method outperforms or performs comparatively with methods like max, mean, mixed, and stochastic pooling. | The paper proposes "wavelet pooling" as an alternative for traditional subsampling methods, e.g. max/average/global pooling, etc., within convolutional neural networks.
Experiments on the MNIST, CIFAR-10, SVHN and KDEF datasets show that the proposed wavelet-based method has
competitive performance with existing methods while still being able to address the overfitting behavior of max pooling.
Strong points
- The method is sound and well motivated.
- The proposed method achieves competitive performance.
Weak points
- No information about added computational costs is given.
- Experiments are conducted in relatively low-scale datasets.
Overall the method is well presented and properly motivated. The paper has a good flow and is easy to follow. The authors effectively demonstrate, with a few toy examples, the weaknesses of traditional methods, i.e. max pooling and average pooling. Moreover, their extended evaluation on several datasets shows the performance of the proposed method in different scenarios.
My main concerns with the manuscript are the following.
Compared to traditional methods, the proposed method seems to require a higher computational cost. In a deep neural network setting, where the operation is executed a large number of times, this is of importance. However, no indication is given of what the added computational costs of the proposed method are and how they compare to existing methods. A comparison in that regard would strengthen the paper.
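To make the cost question concrete, my reading of the per-channel operation is roughly the following (a PyWavelets sketch under my own interpretation of the method, not the authors' implementation):

```python
import numpy as np
import pywt

def wavelet_pool(feature_map, wavelet="haar"):
    # Two-level 2D DWT; the first-level detail subbands are discarded and the
    # output is reconstructed from the second-level coefficients only, giving a
    # roughly 2x spatially downsampled feature map.
    cA2, details2, _details1 = pywt.wavedec2(feature_map, wavelet, level=2)
    return pywt.waverec2([cA2, details2], wavelet)

x = np.random.rand(8, 8)
print(wavelet_pool(x).shape)  # (4, 4)
```

Reporting the wall-clock or FLOP overhead of these transforms relative to max pooling would directly answer the question above.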
In many of the experiments, the manuscript stresses the overfitting behavior of max pooling. This makes me wonder whether this is caused by the fact that experiments are conducted on relatively small datasets. While the currently tested datasets are a good indication of the performance of the proposed method, an evaluation on a large-scale scenario, e.g. ILSVRC'12, could solidify the message sent by this manuscript. Moreover, it would increase the relevance of this work in the computer vision community.
Finally, related to the presentation, I would recommend presenting the plots, i.e. Fig. 8,10,12,14, for the training and validation image subsets in two separate plots. Currently, results for training and validation sets are mixed in the same plot, and due to the clutter it is not possible to see the trends clearly.
Similarly, I would recommend referring to the Tables added in the paper when discussing the performance of the proposed method w.r.t. traditional alternatives.
I encourage the authors to address my concerns in their rebuttal |
iclr_2018_SJi9WOeRb | GRADIENT ESTIMATORS FOR IMPLICIT MODELS
Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions. The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models. This paper alleviates the need for such approximations by proposing the Stein gradient estimator, which directly estimates the score function of the implicitly defined distribution. The efficacy of the proposed estimator is empirically demonstrated by examples that include gradient-free MCMC, meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity. | Post rebuttal phase (see below for original comments)
================================================================================
I thank the authors for revising the manuscript. The method makes sense now, and I think it's quite interesting. While I do have some concerns (e.g. the choice of eta, and that batching may not produce a consistent gradient estimator), I think the paper should be accepted. I have revised my score accordingly.
That said, the presentation (especially in Section 2) needs to be improved. The main problem is that many symbols have been used without being defined, e.g. phi, q_phi, \pi, and a few more. While the authors might assume that this is obvious, it can be tricky for a reader - especially someone like me who is not familiar with GANs. In addition, the derivation of the estimator in Section 3 was also sloppy. There are neater ways to derive this using RKHS theory without doing it on a d'-dimensional space.
Revised summary: The authors present a method for estimating the gradient of some training objective for generative models used to sample data, such as GANs. The idea is that this can be used in a training procedure. The idea is based off the Stein's identity, for which the authors propose a kernelized solution. The key insight comes from rewriting the variational lower bound so that we are left with having to compute the gradients w.r.t a random variable and then applying Stein's identity. The authors present applications in Bayesian NNs and GANs.
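For context (my own recollection of the machinery, not text from the paper): Stein's identity states that, for a smooth density q and a test function h satisfying mild boundary conditions,

```latex
\mathbb{E}_{q(z)}\!\left[\, h(z)\,\nabla_z \log q(z)^{\top} + \nabla_z h(z) \,\right] \;=\; 0 .
```

Replacing the expectation by samples and choosing kernel test functions h(.) = k(., z_j) gives a linear system in the unknown scores, whose ridge-regularized solution involves inverting (K + eta I) for the kernel Gram matrix K -- which is why the choice of eta relative to the spectrum of K, and the cost of forming K, come up in the concerns below.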
Summary
================================================================
The authors present a method for estimating the gradient of some training objective
for generative models used to sample data, such as GANs. The idea is that this can be
used in a training procedure. The idea is based off the Stein's identity, for which the
authors propose a kernelized solution. The authors present applications in Bayesian NNs
and GANs.
Detailed Reviews
================================================================
My main concern is what I raised via a comment, for which I have not received a response
as yet. It seems that you want the gradients w.r.t the parameters phi in (3). But the
line immediately after claims that you need the gradients w.r.t the domain of a random
variable z and the subsequent sections focus on the gradients of the log density with
respect to the domain. I am not quite following the connection here.
Also, it doesn't help that many of the symbols on page 2, which elucidate the setup, have not been defined. What are the quantities phi, q, q_phi, epsilon, and pi?
Presentation
- Bottom row in Figure 1 needs to be labeled. I eventually figured that the colors
correspond to the figures above, but a reader is easily confused.
- As someone who is not familiar with BNNs, I found the description in Section 4.2
inadequate.
Some practical concerns:
- The fact that we need to construct a kernel matrix is concerning. Have you tried batch versions of this estimator which update the gradients with a few data points?
- How is the parameter \eta chosen in practice? Can you comment on the values that you
used and how it compared to the eigenvalues of the kernel matrix?
Minor
- What is the purpose behind sections 3.1 and 3.2? They don't seem pertinent to the rest
of the exposition. Same goes for section 3.5? I don't see the authors using the
gradient estimators for out-of-sample points?
I am giving an indifferent score mostly because I did not follow most of the details. |
iclr_2018_SJyfrl-0b | Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. Several of these methods are based on the SkipGram algorithm, and they usually process a large number of multi-hop neighbors in order to produce the context from which node representations are learned. In this paper, we propose an effective and also efficient method for generating node embeddings in graphs that employs a restricted number of permutations over the immediate neighborhood of a node as context to generate its representation, thus ego-centric representations. We present a thorough evaluation showing that our method outperforms state-of-the-art methods in six different datasets related to the problems of link prediction and node classification, being one to three orders of magnitude faster than baselines when generating node embeddings for very large graphs. | The paper includes the terms first-order proximity ("the concept that connected nodes in a graph should have similar properties") and second-order proximity ("the concept that nodes with similar neighborhoods should have common characteristics"). These are called homophily in social network analysis. It is also known as assortativity in network science literature. The paper states on Page 4: "A trade-off between first and second order proximity can be achieved by changing the parameter k, which simultaneously controls both the sizes of sentences generated and the size of the wind used in the SkipGram algorithm." It is not readily clear why this statement should hold. Also the paper does not include a discussion on how the amount of homophily in the graph affects the results. There are various ways of measuring the level of homophily in a graph. There is simple local consistency, which is % of edges connecting nodes that have the same characteristics at each endpoint. Neville & Jensen's JMLR 2007 paper describes relational auto-correlation, which is Pearson contingency coefficient on the characteristics of endpoints of edges. Park & Barabasi's PNAS 2007 paper describes dyadicity and heterophilicity, which measures connections of nodes with the same characteristics compared to a random model and the connections of nodes with different characteristics compared to a random model.
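For concreteness, the simplest of these measures can be computed directly from an edge list (a generic sketch; dyadicity and heterophilicity additionally compare such counts against a random-attachment baseline):

```python
def local_consistency(edges, label):
    # Fraction of edges whose two endpoints share the same label/attribute --
    # a simple proxy for the level of homophily in the graph.
    same = sum(1 for u, v in edges if label[u] == label[v])
    return same / len(edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
label = {0: "a", 1: "a", 2: "b", 3: "b"}
print(local_consistency(edges, label))  # 0.5
```

Reporting such a number for each of the six datasets would make it possible to relate the method's gains to the amount of homophily present.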
k ("which simultaneously controls both the sizes of sentences generated and the size of the wind used in the SkipGram algorithm") is a free-parameter in the proposed algorithm. The paper needs an in-depth discussion of the role of k in the results. Currently, no discussion is provided on k except that it was set to 5 for the experiments. From a network science perspective, it makes sense to have k vary per node.
It is also not clear why d = 128 was chosen as the size of the embedding.
From the description of the experimental setup for link prediction, it is not clear whether a stratified sample of the entries of the adjacency matrix (i.e., both 0 and 1 entries) was selected.
For the node classification experiments, information on class distribution and homophily levels would be helpful.
In Section 5.1, the paper states: "For highly connected graphs, larger numbers of permutations should be chosen (n in [10, 1000]) to better represent distributions, while for sparser graphs, smaller values can be used (n in [1, 10])." How connected is a "highly connected" graph? How sparse is a "sparser" graph? In general, the paper lacks an in-depth analysis of when the approach works and when it does not. I recommend running experiments on synthetic graphs (such as Barabasi-Albert, Watts-Strogatz, Forest Fire, Kronecker, and/or BTER graphs), systematically changing various characteristics of the graph, and reporting the results.
The faster runtime is interesting but not surprising given the ego-centric nature of the approach. |
iclr_2018_HkGJUXb0- | LEARNING EFFICIENT TENSOR REPRESENTATIONS WITH RING STRUCTURE NETWORKS
Tensor train (TT) decomposition is a powerful representation for high-order tensors, which has been successfully applied to various machine learning tasks in recent years. In this paper, we propose a more generalized tensor decomposition with ring structure network by employing circular multilinear products over a sequence of lower-order core tensors, which is termed as TR representation. Several learning algorithms including blockwise ALS with adaptive tensor ranks and SGD with high scalability are presented. Furthermore, the mathematical properties are investigated, which enables us to perform basic algebra operations in a computationally efficiently way by using TR representations. Experimental results on synthetic signals and real-world datasets demonstrate the effectiveness of TR model and the learning algorithms. In particular, we show that the structure information and high-order correlations within a 2D image can be captured efficiently by employing an appropriate tensorization and TR decomposition. | The paper addresses the problem of tensor decomposition which is relevant and interesting. The paper proposes Tensor Ring (TR) decomposition which improves over and bases on the Tensor Train (TT) decomposition method. TT decomposes a tensor in to a sequences of latent tensors where the first and last tensors are a 2D matrices.
The proposed TR method generalizes TT in that the first and last tensors are also 3rd-order tensors instead of 2nd-order. I think such generalization is interesting but the innovation seems to be very limited.
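To fix ideas, the circular structure amounts to closing the TT chain with a trace (a small numpy sketch of my reading of the format, with cores G_k of shape (r_k, n_k, r_{k+1}) and r_d = r_0):

```python
import numpy as np

def tr_element(cores, idx):
    # Reconstruct one entry of the tensor from Tensor Ring cores:
    # T[i_1, ..., i_d] = trace( G_1[:, i_1, :] @ ... @ G_d[:, i_d, :] ).
    # Setting the boundary ranks to 1 collapses the trace and recovers plain TT.
    acc = np.eye(cores[0].shape[0])
    for G, i in zip(cores, idx):
        acc = acc @ G[:, i, :]
    return np.trace(acc)

cores = [np.random.rand(2, 4, 3), np.random.rand(3, 4, 2)]  # ring ranks (2, 3, 2)
print(tr_element(cores, (1, 2)))
```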
The paper develops three different kinds of solvers for TR decomposition, i.e., SVD, ALS and SGD. All of these are well known methods.
Finally, the paper provides experimental results on synthetic data (3 oscillated functions) and image data (a few sampled images). I think the paper could be greatly improved by providing more experiments and ablations to validate the benefits of the proposed methods.
Please refer to below for more comments and questions.
-- The rating has been updated.
Pros:
1. The topic is interesting.
2. The generalization over TT makes sense.
Cons:
1. The writing of the paper could be improved and made clearer: the conclusions on inner product and F-norm can be integrated into "Theorem 5". And those "theorems" in section 4 are just some properties from previous definitions; they are not theorems.
2. The property of TR decomposition is that the tensors can be shifted (circular invariance). This is an interesting property and it seems to be the major strength of TR over TT. I think the paper could be significantly improved by providing more applications of this property in both theory and experiments.
3. As the number of latent tensors increases, the ALS method becomes a much worse approximation of the original optimization. Any insights or results on the optimization performance vs. the number of latent tensors?
4. Also, the paper mentions Eq. 5 (ALS) is optimized by solving d subproblems alternately. I think this only covers a single round of optimization. Should ALS be applied repeatedly (each round solving d problems) until convergence?
5. What is the memory consumption for different solvers?
6. SGD also needs to update at least d times for all d latent tensors. Why is the complexity O(r^3) independent of the parameter d?
7. The ALS is very slow (judging by the results in section 5.1), which makes it impractical. The experimental part could be improved by providing more results and guidance on how to choose among the different solvers.
8. What does "iteration" mean in experimental results such as table 2? Different algorithms have different costs per iteration, so comparing iteration counts does not seem fair. The results would make more sense if total time consumption and time cost per iteration were provided. This also applies to table 4.
9. Why is the \epsilon in table 3 not consistent? Why not choose \epsilon = 9e-4 and \epsilon = 2e-15 for tensorization?
10. Also, table 3 could be greatly improved by providing more ablations such as results for (n=16, d=8), (n=4, d=4), etc. That could help readers to better understand the effect of TR.
11. Section 5.3 could be improved by providing a curve (compression vs. error) instead of just providing a table of sampled operating points.
12. The paper mentions the application of image representation but only experiment on 32x32 images. How does the proposed method handle large images? Otherwise, it does not seem to be a practical application.
13. Figure 5: Are the RSE measures computed over the whole CIFAR-10 dataset or the displayed images?
Minor:
- Typo: Page 4 Line 7 "Note that this algorithm use the similar strategy": use -> uses |
iclr_2018_H1DJFybC- | We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of L A T E X. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together these results are a step towards agents that induce useful, humanreadable programs from perceptual input. | Summary of paper:
This paper tackles the problem of inferring graphics programs from hand-drawn images by splitting it into two separate tasks:
(1) inferring trace sets (functions to use in the program) and
(2) program synthesis, using the results from (1).
The usefulness of this split is referred to as the trace hypothesis.
(1) is done by training a neural network on data [input = rendered image; output = trace sets] which is generated synthetically. During test time, a trace set is generated using a population-based method which samples and assigns weights to the guesses made by the neural network based on a similarity metric. Generalization to hand-drawn images is ensured by learning the similarity metric.
(2) is done by feeding the trace set into a program synthesis tool of Solar Lezama. Since this is too slow, the authors design a search policy which proposes a restriction on the program search space, making it faster. The final loss for (2) in equation 3 takes into consideration the time taken to synthesize images in a search space.
---
Quality: The experiments are thorough and it seems to work. The potential limitation is generalization to non-synthetic data.
Clarity: The high level idea is clear however some of the details are not clear.
Originality: This work is one of the first that tackles the problem described.
Significance: There are many ad-hoc choices made in the paper, making it hard to extract an underlying insight that makes things work. Is it the trace hypothesis? Or is it just that trying enough things made this work?
---
Some questions/comments:
- Regarding the trace set inference, the loss function during training and the subsequent use of SMC during test time are pretty unconventional. The use of the likelihood P_{\theta}[T | I] as a proposal, as the paper also acknowledges, is also unconventional. One way to look at this which could make it less unconventional is to pose the training phase as learning the proposal distribution in an amortized way (instead of maximizing likelihood) as, for example, in [1, 2].
- In section 2.1., the paper talks about learning the surrogate likelihood function L_{learned} in order to work well for actual hand drawings. This presumably stems from the problem of mismatch between the distribution of the synthetic data used for training and the actual hand drawings. But then L_{learned} is also learned from synthetic data. What makes this translate to non-synthetic data? Does this translate to non-synthetic data?
- What does "Intersection over Union" in Figure 8 mean?
- The details for 3.1 are not clear. In particular, what does t(\sigma | T) in equation 3 refer to? Time to synthesize all images in \sigma? Why is the concept of Bias-optimality important?
- It seems from Table 4 that by design, the learned policy for the program search space already limits the search space to programs with maximum depth of the abstract syntax tree of 3. What is the usual depth of an AST when using Sketch?
---
Minor Comments:
- In page 4, section 2.1: "But pixel-wise distance fares poorly... match the model's renders." and "Pixel-wise distance metrics are sensitive... search space over traces." seem to be saying the same thing
- End of page 5: \citep Polozov & Gulwani (2015)
- Page 6: \citep Solar Lezama (2008)
---
References
[1] Paige, B., & Wood, F. (2016). Inference Networks for Sequential Monte Carlo in Graphical Models. In Proceedings of the 33rd International Conference on Machine Learning, JMLR W&CP 48: 3040-3049.
[2] Le, T. A., Baydin, A. G., & Wood, F. (2017). Inference Compilation and Universal Probabilistic Programming. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (Vol. 54, pp. 1338–1348). Fort Lauderdale, FL, USA: PMLR. |
iclr_2018_B1nxTzbRZ | This paper we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states. | The authors introduce the task of "defogging", by which they mean attempting to infer the contents of areas in the game StarCraft hidden by "the fog of war".
The authors train a neural network to solve the defogging task, define several evaluation metrics, and argue that the neural network beats several naive baseline models.
On the positive side, the task is a nice example of reasoning about a complex hidden state space, which is an important problem moving forwards in deep learning.
On the negative side, from what I can tell, the authors don't seem to have introduced any fundamentally new architectural choices in their neural network, so the contribution seems fairly specific to mastering StarCraft, but at the same time, the authors don't evaluate how much their defogger actually contributes to being able to win StarCraft games. All of their evaluation is based on the accuracy of defogging.
Granted, being able to infer hidden states is of course an important problem, but the authors appear to mainly have applied existing techniques to a benchmark that has minimal practical significance outside of being able to win StarCraft competitions, meaning that, at least as the paper is currently framed, the critical evaluation metric would be showing that a defogger helps to win games.
Two ways I could imagine the contribution being improved are either highlighting and generalizing novel insights gleaned from the process of building the neural network that could help people build "defoggers" for other domains (and spelling out more explicitly what domains the authors expect their insights to generalize to), or doubling down on the StarCraft application specifically and showing that the defogger helps to win games. A minimal version of the second modification would be having a bot that has access to a defogger play against a bot that does not have access to one.
All that said, as a paper on an application of deep learning, the paper appears to be solid, and if the area chairs are looking for that sort of contribution, then the work seems acceptable.
Minor points:
- Is there a benefit to having a model that jointly predicts unit presence and count, rather than having two separate models (e.g., one that feeds into the next)? Could predicting presence or absence separately be a way to encourage sparsity, since absence of a unit is already representable as a count of zero? The choice to have one model seems especially peculiar given the authors say they couldn't get one set of weights that works for both their classification and regression tasks
- Notation: I believe the space U is never described in the main text. What components precisely does an element of U have?
- The authors say they use gameplay from no later than 11 minutes in the game to avoid the difficulties of increasing variance. How long is a typical game? Is this a substantial fraction of the time of the games studied? If it is not, then perhaps the defogger would not help so much at winning.
- The F1 performance increases are somewhat small. The L1 performance gains are bigger, but the authors only compare L1 on true positives. This means they might have very bad error on false positives. (The authors state they are favoring the baseline in this comparison, but it would be nice to have those numbers.)
- I don't understand when the authors say the deep model has better memory than baselines (which includes a perfect memory baseline) |
iclr_2018_By3VrbbAb | Search engine users nowadays heavily depend on query completion and correction to shape their queries. Typically, the completion is done by database lookup which does not understand the context and cannot generalize to prefixes not in the database. In the paper, we propose to use unsupervised deep language models to complete and correct the queries given an arbitrary prefix. We address two main challenges that renders this method practical for large-scale deployment: 1) we propose a method for integrating error correction into the language model completion via an edit-distance potential and a variant of beam search that can exploit such a potential function; and 2) we show how to efficiently perform CPUbased computation to complete the queries, with error correction, in real time (generating top 10 completions within 16 ms). Experiments show that the method substantially increases hit rate over standard approaches, and is capable of handling tail queries. | This paper presents methods for query completion that includes prefix correction, and some engineering details to meet particular latency requirements on a CPU. Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the 'right' way to go about this versus other choices that might be made. The final threshold for 'successful' speedups feels somewhat arbitrary -- why 16ms in particular? In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper.
While the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results. Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all. Under such an approach, the unedited string receives probability one, which doesn't leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization. Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language. For example, the probability p_1 of 'pkoe' versus p_2 of 'zoze' as likely versions of 'poke' (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model. Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting. And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model. It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume. That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not. As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious.
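To illustrate the point with the paper's own example (a quick sketch of the kind of scoring I understand the paper to use; lev is the standard unit-cost dynamic program):

```python
import math

def lev(a, b):
    # standard unit-cost Levenshtein distance via dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def edit_score(typed, intended, alpha=1.0):
    # log-probability proportional to -(number of edits): unnormalized, and the
    # unedited string always receives score exp(0) = 1.
    return math.exp(-alpha * lev(typed, intended))

print(edit_score("poke", "poke"))                              # 1.0
print(edit_score("pkoe", "poke"), edit_score("zoze", "poke"))  # identical scores
```

A learned, properly normalized string-edit model would break exactly this tie.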
Some other comments:
Your presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes. In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn't match your prior notation, or, frankly, the way in which the experimental results are described.
The notation that you use is a bit sloppy and not everything is introduced in a clear way. For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i-th position (which you use in section 3.3). Also, you claim that s_0 is the empty string, but isn't it more correct to model this symbol as the beginning-of-string symbol? If not, what is the difference between s_0:m and s_1:m? If s_0 is the start-of-string symbol, then s_0:m is of length m+1, not length m.
You spend too much time on common, well-known information, such as the LSTM equations (you don't need them, and why number them if you never refer to them later?). Also, the dynamic programming for Levenshtein distance is foundational and need not be presented in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn't require to make that point).
Is there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes? This doesn't strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn't really correspond to true 'unseen' scenarios. I think another way of teasing apart such results would be recommended.
You never explicitly mention what your training loss is in section 5.1.
Overall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods. |
iclr_2018_BJLmN8xRW | Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting. | SUMMARY
This paper addresses the cybersecurity problem of domain generation algorithm (DGA) detection. A class of malware uses algorithms to automatically generate artificial domain names for various purposes, e.g. to generate large numbers of rendezvous points. DGA detection concerns the (automatic) distinction of actual and artificially generated domain names. In this paper, a basic problem formulation and general solution approach is investigated, namely that of treating the detection as a text classification task and to let domain names arrive to the classifier as strings of characters. A set of five deep learning architectures (both CNNs and RNNs) are compared empirical on the text classification task. A domain name data set with two million instances is used for the experiments. The main conclusion is that the different architectures are almost equally accurate and that this prompts a preference of simpler architectures over more complex architectures, since training time and the likelihood for overfitting can potentially be reduced.
COMMENTS
The introduction is well-written, clear, and concise. It describes the studied real-world problem and clarifies the relevance and challenge involved in solving the problem. The introduction provides a clear overview of deep learning architectures that have already been proposed for solving the problem as well as some architectures that could potentially be used. One suggestion for the introduction is that the authors take some of the description of the domain problem and put it into a separate background section to reduce the text the reader has to consume before arriving at the research problem and proposed solution.
The methods section (Section 2) provides a clear description of each of the five architectures along with brief code listings and details about whether any changes or parameter choices were made for the experiment. In the beginning of the section, it is not clarified why, if a 75 character string is encoded as a 128 byte ASCII sequence, the content has to be stored in a 75 x 128 matrix instead of a vector of size 128. This is clarified later but should perhaps be discussed earlier to allow readers from outside the subarea to grasp the approach.
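Presumably the 75 x 128 shape is a per-position one-hot encoding over the ASCII alphabet; spelling this assumption out (it is my assumption, not a statement from the paper) would help readers from outside the subarea:

```python
import numpy as np

def one_hot_domain(name, max_len=75, alphabet_size=128):
    # One row per character position, one column per ASCII code point:
    # row i is the one-hot vector of the i-th character (all-zero rows = padding).
    x = np.zeros((max_len, alphabet_size), dtype=np.float32)
    for i, ch in enumerate(name[:max_len]):
        x[i, ord(ch) % alphabet_size] = 1.0
    return x

print(one_hot_domain("example.com").shape)  # (75, 128)
```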
Section 3 describes the experiment settings, the results, and discusses the learned representations and the possible implications of using either the deep architectures or the “baseline” Random Forest classifier. Perhaps, the authors could elaborate a little bit more on why Random Forests were trained on a completely different set of features than the deep architectures? The data is stated to be randomly divided into training (80%), validation (10%), and testing (10%). How many times is this procedure repeated? (That is, how many experimental runs were averaged or was the experiment run once?).
In summary, this is an interesting and well-written paper on a timely topic. The main conclusion is intuitive. Perhaps the conclusion is even regarded as obvious by some but, in my opinion, the result is important since it was obtained from new, rather extensive experiments on a large data set and through the comparison of several existing (earlier proposed) architectures. Since the main conclusion is that simple models should be prioritised over complex ones (due to that their accuracy is very similar), it would have been interesting to get some brief comments on a simplicity comparison of the candidates at the conclusion.
MINOR COMMENTS
Abstract: “Little studies” -> “Few studies”
Table 1: “approach” -> “approaches”
Figure 1: Use the same y-axis scale for all subplots (if possible) to simplify comparison. Also, try to move Figure 1 so that it appears closer to its inline reference in the text.
Section 3: “based their on popularity” -> “based on their popularity” |
iclr_2018_BJRZzFlRb | Published as a conference paper at ICLR 2018 COMPRESSING WORD EMBEDDINGS VIA DEEP COMPOSITIONAL CODE LEARNING
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ∼ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture. | This paper proposed a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices. The proposed method can reduce more than 90% memory consumption while keeping original model accuracy in both the sentiment analysis task and the machine translation tasks.
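To make the multi-codebook construction concrete, a minimal sketch of how an embedding would be composed from learned codebooks at inference time (the codebook count M, size K, and dimensionality D below are hypothetical; this is an illustration, not the authors' implementation):

```python
import numpy as np

M, K, D = 4, 16, 300          # hypothetical: 4 codebooks of 16 basis vectors, 300-dim embeddings
codebooks = np.random.randn(M, K, D).astype(np.float32)  # stands in for learned basis vectors

def compose_embedding(code):
    """Sum one basis vector per codebook, selected by the word's discrete code."""
    return sum(codebooks[m, code[m]] for m in range(M))

word_code = (3, 2, 1, 8)       # the paper's example code; stored in M*log2(K) bits per word
vec = compose_embedding(word_code)
print(vec.shape)               # (300,)
```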
Overall, the paper is well-written. The motivation is clear, the idea and approaches look suitable and the results clearly follow the motivation.
I think it is better to clarify in the paper that the proposed method can reduce only the complexity of the input embedding layer. For example, the model is not guaranteed to be able to map the resulting "indices" back to actual words (i.e., there can be multiple words with exactly the same indices, such as rows 4 and 6 in Table 5), and there is also no trivial method to restore the original indices from the composite vector. As a result, the method cannot also be used as a proxy for the word prediction (softmax) layer, which is another, and usually more critical, bottleneck of the machine translation task.
For the reader's comprehension, it would also be helpful to report the total memory consumption of each model.
Also, although this paper focuses only on the input embeddings, the authors should cite some recent papers that aim to reduce the complexity of the softmax layer. There are many such studies, and citing similar approaches would help readers grasp the overall landscape of this line of work.
Furthermore, I would like to see two additional analyses. First, if the proposed model is trained from scratch (e.g., with each index value initialized randomly), what results are obtained? Second, what kind of information is encoded in each trained basis vector? Are there commonalities or differences between bases trained on different tasks?
iclr_2018_BJy0fcgRZ | Understanding how people represent categories is a core problem in cognitive science, with the flexibility of human learning remaining a gold standard to which modern artificial intelligence and machine learning aspire. Decades of psychological research have yielded a variety of formal theories of categories, yet validating these theories with naturalistic stimuli remains a challenge. The problem is that human category representations cannot be directly observed and running informative experiments with naturalistic stimuli such as images requires having a workable representation of these stimuli. Deep neural networks have recently been successful in a range of computer vision tasks and provide a way to represent the features of images. In this paper, we introduce a method for estimating the structure of human categories that draws on ideas from both cognitive science and machine learning, blending human-based algorithms with state-of-theart deep representation learners. We provide qualitative and quantitative results as a proof of concept for the feasibility of the method. Samples drawn from human distributions rival the quality of current state-of-the-art generative models and outperform alternative methods for estimating the structure of human categories. | This paper presents a method based on GANs for visualizing how humans represent visual categories. Authors perform experiments on two datasets: Asian Faces Dataset and ImageNet Large Scale Recognition Challenge dataset.
Positive aspects:
+ The idea of using GANs for this goal is smart and interesting
+ The results seem interesting too
Weaknesses:
- Some aspects of the paper are not clear and presentation needs improvement.
- I miss a clearer comparison of results with previous methods, such as Vondrick et al. 2015.
Specific comments and questions:
- Figure 1 is not clear. Authors should clarify how they use the inference network and what the two arrows from this inference network represent.
- Figure 2 is also not clear. The FLD projections of the MCMCP chains alone are difficult to interpret. The legend of the figure is too tiny. The right part of the figure should be better described in the text or in the caption; I do not understand well what it illustrates.
- Regarding the human experiments on AMT: how do the authors deal with noise in the workers' performance? Is any qualification task used? What are the instructions given to the workers?
- In section 4.2. the authors state "We also simultaneously learn a corresponding inference network, .... granular human biases captured". This seems interesting but I didn't find any result on that in the paper. Can you give more details or refer to where in the paper it is discussed/tested?
- Figure 4 shows the "most interpretable mixture components". How were these "most interpretable" components selected?
- In second paragraph Section 4.3, it should be Table 1 instead of Figure 1.
- It would be interesting to see a discussion on why MCMCP Density is better for group 1 and MCMCP Mean is better for group 2. Seeing the confusion matrices could be useful.
I like this paper. The addressed problem is challenging and the proposed idea seems interesting. However, the aspects mentioned make me think the paper needs some improvements to be published. |
iclr_2018_Hki-ZlbA- | The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network's robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with a provably-minimal distance from a given input point. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced by those attacks to the ground truths; and also of defense techniques, by computing the distance to the ground truths before and after the defense is applied, and measuring the improvement. We use this technique to assess recently suggested attack and defense techniques. | Summary: The paper proposes a method to compute adversarial examples with minimum distance to the original inputs, and to use the method to do two things: show how well heuristic methods do in finding "optimal/minimal" adversarial examples (how close they come to the minimal change that flips the label), and assess how well a method that is designed to make the model more robust to adversarial examples actually works.
Pros:
I like the idea and the proposed applications. It is certainly highly relevant, both in terms of assessing models for critical use cases as well as a tool to better understand the phenomenon.
Some of the suggested insights in the analysis of defense techniques are interesting.
Cons:
There is not much technical novelty. The method boils down to applying Reluplex (Katz et al. 2017b) in a binary search (although I acknowledge the extension to L1 as a distance metric).
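For reference, the wrapper in question is essentially a binary search over the perturbation bound, where each query asks a verifier (Reluplex in the paper) whether any adversarial example exists within that bound; a hedged sketch with a stand-in `verifier` callable, not the paper's actual algorithm:

```python
def minimal_adversarial_distance(verifier, lo=0.0, hi=1.0, tol=1e-3):
    """Binary search for the smallest epsilon (within tol) at which an
    adversarial example exists around the input.

    `verifier(eps)` is assumed to return True iff an adversarial example exists
    within distance eps; it stands in for a Reluplex query and is illustrative.
    """
    assert verifier(hi), "no adversarial example even at the largest bound"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if verifier(mid):
            hi = mid     # adversarial example exists; shrink the bound
        else:
            lo = mid     # region is verified safe; the minimal distance is larger
    return hi

# toy oracle: pretend the true minimal distance is 0.37
print(round(minimal_adversarial_distance(lambda eps: eps >= 0.37), 3))
```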
The practical application of the method is very limited since the search is very slow and is only feasible at all for relatively small models. State-of-the-art practical models that achieve accuracy rates that make them interesting for deployment in potentially safety-critical applications are out of reach for this analysis. The network analysed here does not reach the state of the art on MNIST from almost two decades ago. The analysis also has to be done for each sample, and the long runtime does not permit analysing large numbers of input samples, which makes the analysis of the increase in robustness rather weak: the statement can only be made for the very limited set of tested samples.
It is also unclear whether it is possible to include distance metrics that capture more sophisticated attacks that fool networks even under various transformations of the input.
The paper does not consider the more recent and highly relevant Moosavi-Dezfooli et al. “Universal Adversarial Perturbations” CVPR 2017.
The distance metrics that are considered are only L_inf and L1, whereas it would be interesting to see more relevant “perceptual losses” such as those used in style transfer and domain adaptation with GANs.
Minor details:
* I would consider calling them “minimal adversarial samples” instead of “ground-truth”.
* I don’t know if the notation in the Equation in the paragraph describing Carlini & Wagner comes from the original paper, but the inner max would be easier to read as \max_{i \neq t} \{Z(x’)_i \}
* Page 3 "Neural network verification": I don't agree with the statement that neural networks commonly are trained on "a small set of inputs".
* Algorithm 1 is essentially only a description of binary search, which should not be necessary.
* What is the timeout for the computation, mentioned in Sec 4?
* Page 7, second paragraph: I wouldn’t say the observation is in line with Carlini & Wagner, because they take a random step, not necessarily one in the direction of the optimum? That’s also the conclusion two paragraphs below, no?
* I don’t fully agree with the conclusion that the defense of Madry does not overfit to the specific method of creating adversarial examples. Those were not created with the CW attack, but are related because CW was used to initialize the search. |
iclr_2018_BybQ7zWCb | Neural Style Transfer has become a popular technique for generating images of distinct artistic styles using convolutional neural networks. This recent success in image style transfer has raised the question of whether similar methods can be leveraged to alter the "style" of musical audio. In this work, we attempt long time-scale high-quality audio transfer and texture synthesis in the time-domain that captures harmonic, rhythmic, and timbral elements related to musical style, using examples that may have different lengths and musical keys. We demonstrate the ability to use randomly initialized convolutional neural networks to transfer these aspects of musical style from one piece onto another using 3 different representations of audio: the log-magnitude of the Short Time Fourier Transform (STFT), the Mel spectrogram, and the Constant-Q Transform spectrogram. We propose using these representations as a way of generating and modifying perceptually significant characteristics of musical audio content. We demonstrate each representation's shortcomings and advantages over others by carefully designing neural network structures that complement the nature of musical audio. Finally, we show that the most compelling "style" transfer examples make use of an ensemble of these representations to help capture the varying desired characteristics of audio signals. | This paper studies style transfer for musical audio, and largely proposes some additions to the framework proposed by Ulyanov and Lebedev. The changes are designed to improve the long-term temporal structure and harmonic matching of the stylized audio. They carry out a few experiments to demonstrate how their proposed approach improves upon the baseline model.
Overall, I don't think this paper provides sufficiently novel or justified contributions compared to the baseline approach of Ulyanov and Lebedev. It largely studies what happens when a different spectrogram representation is used on the input, and when somewhat different network architectures are used. These changes are interesting, but don't provide a lot of additional information which I believe would be interesting to the ICLR community. They seem better suited for an (audio) signal processing venue, or a more informal venue. In addition, the results are not terribly compelling. If the proposed changes (which are heuristically, not theoretically motivated) resulted in huge improvements to sound quality, I might be convinced. More concretely, the results are still very far away from being able to be used in a commercial application (in contrast with image style transfer, whose impressive results were immediately commercially applied). One reason I think the results remain bad is that the audio signal is still fundamentally represented as a phase-invariant representation. Even if you backpropagate through the time-frequency transformation, the transformation itself discards phase, and so various signals (with different phase characteristics) will appear the same after transformation. I believe this contributes the most to the fact that the resulting audio sounds very artifact-ridden and unrealistic. If the paper had been able to overcome this limitation, I might be more convinced, but as-is I don't think it warrants acceptance at ICLR.
Specific comments:
- The description of Ulyanov & Lebedev's algorithm in 3.1 is confusingly structured. For example, the sentence "The L2 distance between the generated audio and the content audio's feature maps..." is basically a concatenation of roughly 6 thoughts which should be separated into different sentences. The location of the equations (1), (2) does not correspond to where they are introduced in the text. In addition, I don't understand how S and C are generated. It is written that S and C are the "log-magnitude feature maps for style and content". But the "feature maps" X are themselves a log-magnitude time frequency representation (x) convolved with the filterbank. So how are S and C "log-magnitude feature maps"? Surely you aren't computing the log of the output of the filterbank? More equations would be helpful here. Finally, it would be helpful to provide an equation both for G and W instead of just saying that W is analogously defined (a sketch of the standard definitions is given after these comments).
- I don't see any reason to believe that a mel-scaled spectrogram would better capture longer time scales or rhythmic information. Mel-scaling your spectrogram just changes the frequency axis to a mel scale, which makes it somewhat closer to our perception; it does not modify the way time is represented in any way. In fact, in musical machine learning tasks usually swapping between CQT and mel-scaled spectrograms (with a comparable number of frequency bins) has little effect, so I don't see any compelling reason to use one for "rhythm". You need to provide strong empirical or theoretical evidence to back up your claim that this is a principled approach. Instead, I would expect that your change of convolutional structure (to the dilated convolutions, etc) for the "mel spectrogram" branch of your network would account more heavily for stronger modeling of longer timescales.
- You refer to "WaveNet auto-encoders" and cite van den Oord et al. The original wavenet paper did not propose an auto-encoder structure; Engel et al. did.
- "neither representation is capable of representing spatial patterns along the frequency axis" What do you mean by this? Mel or linear-frequency (or CQT) spectrograms exhibit very strong patterns along their frequency axis.
- The method for automatically setting the scales of the different loss terms seems interesting, but I can't find anywhere a description of how you apply each of the beta terms. Are they analogous to the alpha and beta parameters in equation (4)? If so, it appears that gamma is shared across each beta term; this would mean that changing the value of gamma simply changed the scale of all loss terms at once, which would have no effect on optimization.
- "This is entirely possible though the ensemble representation" typo, though -> through
- That instance normalization causes noisy audio is an interesting empirical result, but I'm interested in a principled explanation of why this would happen.
- "the property of having 0 mean and unit variance" - you use this to describe the SeLU nonlinearity. That's not a property of the nonlinearity, it's a property of the activations of different layers when using the nonlinearity (given correct initialization).
- How are the "Inter-Onset Interval Length Distributions" computed? How are you getting the onsets, etc?
- " the maximum cross-correlation value between the time-domain audio waveforms are not significantly affected by the length of this field" - there are many ways effective copying could happen without the time-domain cross-correlation being large. |
iclr_2018_H1wt9x-RW | Under review as a conference paper at ICLR 2018 INTERPRETABLE AND PEDAGOGICAL EXAMPLES
Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts. | The authors define a novel method for creating a pair of models, a student and a teacher model, that are co-trained in a manner such that the teacher provides useful examples to the student to communicate a concept that is interpretable to people. They do this by adapting a technique from computational cognitive science called rational pedagogy. Rather than jointly optimize the student and teacher (as done previously), they have form a coupled relation between the student and teacher where each is providing a best response to the other. The authors demonstrate that their method provides interpretable samples for teaching in commonly used psychological domains and conduct human experiments to argue it can be used to teach people in a better manner than random teaching.
Understanding how to make complex models interpretable is an extremely important problem in ML for a number of reasons (e.g., AI ethics, explainable AI). The approach proposed by the authors is an excellent first step in this direction, and they provide a convincing argument for why a previous approach (joint optimization) did not work. It is an interesting approach that builds on computational cognitive science research and the authors provide strong evidence their method creates interpretable examples. They second part of their article, where they test the examples created by their models using behavioral experiments was less convincing. This is because they used the wrong statistical tests for analyzing the studies and it is unclear whether their results would stand with proper tests (I hope they will! – it seems clear that random samples will be harder to learn from eventually, but I also hoped there was a stronger baseline.).
For analysis, the authors use t-tests directly on KL-divergence and accuracy scores; however, this is inappropriate (see Jaeger, 2008; Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59(4), 434-446.). This is especially applicable to the accuracy score results and the authors should reanalyze their data following the paper referenced above. With respect to KL-divergence, a G-test can be used (see https://en.wikipedia.org/wiki/G-test#Relation_to_Kullback.E2.80.93Leibler_divergence). I suspect the results will still be meaningful, but the appropriate analysis is essential to be able to interpret the human results.
Also, a related article: One article testing rational pedagogy in more ML contexts and using it to train ML models that is
Ho, M. K., Littman, M., MacGlashan, J., Cushman, F., & Austerweil, J. L. (NIPS 2016). Showing versus Doing. Teaching by Demonstration.
For future work, it would be nice to show that the technique works for finding interpretable examples in more complex deep learning networks, which motivated the current push for explainable AI in the first place. |
iclr_2018_HymYLebCb | We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our conclusions from multiple datasets are that
• deep learning using structured image features performs the best compared to graph kernel and classical features based methods; and,
• pure transfer learning works effectively with minimum interference from the user and is robust against small data.
With the advent of big data, graphical representation of information has gained popularity. Being able to classify graphs has applications in many domains. We ask, "Given a small piece of a parent network, is it possible to identify the nature of the parent network (Figure1)?" We address this problem using structured image representations of graphs.
Adjacency matrices are notoriously bad for machine learning. It is easy to see why, from the unstructured image of a small fragment of a road network, in figure (a) below. Though the road network is structured, the random image would convey little or no information to machine learning algorithms (in the image, a black pixel at position (i, j) corresponds to an edge between nodes i and j). Reordering the vertices (figure (b) below) gives a much more structured image for the same subgraph as in (a). Now, the potential to learn distinguishing properties of the subgraph is evident. We propose to exploit this very observation to solve a basic graph problem (see Figure 1). The datasets mentioned in Figure 1 are discussed in Section 2.4. We stress that both images are lossless representations of the same adjacency matrix. We use the structured image to classify subgraphs in two modes: (i) Deep learning models on the structured image representation as input. (ii) The structured image representation is used as input to a transfer learner (Caffe: see Section 2.3) in a pure transfer learning setting without any change to the Caffe algorithm. Caffe outputs top-k categories that best describe the image. For real world images, these Caffe-descriptions are human friendly as seen in Figure 2a. However, for network-images, Caffe gives a description which doesn't really have intuitive meaning (Figure 2b). (Figure 2: An image of a dog and a structured image of a Facebook graph sample vs. their corresponding maximally specific classification vectors returned by Caffe.) We map the Caffe-descriptions to vectors. This allows us to compute similarity between network images using the similarity between Caffe description-vectors (see Section 2). | The paper proposes a subgraph image representation and validates it in image classification and transfer learning problems. The image representation is a minor extension of a method for producing a permutation-invariant adjacency matrix. The experimental results support the claim.
On the positive side, the figures are very helpful for conveying the information.
The work seems to be a little bit incremental. The proposed image representation is mainly based on previous work on permutation-invariant adjacency matrices. The main novelty of this work seems to be transforming a graph into an image. With the proposed representation, the authors are able to apply image classification methods (supervised or unsupervised) to subgraph classification.
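To illustrate the representation being discussed: the subgraph's adjacency matrix is rendered as a binary image after reordering the vertices by some canonical ordering. The degree-based ordering below is a stand-in for the permutation-invariant ordering the paper builds on, not the authors' actual procedure:

```python
import numpy as np

def adjacency_image(adj: np.ndarray) -> np.ndarray:
    """Reorder vertices (here: by descending degree, a stand-in ordering) and
    return the reordered adjacency matrix as a 0/1 image."""
    order = np.argsort(-adj.sum(axis=1), kind="stable")
    return adj[np.ix_(order, order)].astype(np.uint8)

rng = np.random.default_rng(0)
n = 16
adj = (rng.random((n, n)) < 0.2).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # symmetric, no self-loops
img = adjacency_image(adj)
print(img.shape)                                   # a 16 x 16 binary "image"
```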
It would be better if the authors could provide more details in the methodology or framework section.
The experiments on 9 networks support the claim that the image embedding approaches using the proposed image representation of the subgraph outperform the graph kernel and classical feature-based methods. The transfer learning results also seem promising.
The last two process figures in 1.1 can be improved. No caption or figure number is provided.
It would be better to make the notation easy to understand and to avoid using notation in a sentence without a nearby explanation.
For example:
"the test example is correctly classified if and only if its ground truth matches C."(P5)
"We carry out this exercise 4 times and set n to 8, 16, 32 and 64 respectively."(P6)
Some minor issues:
"Zhu et al.(2011) discuss heterogeneous transfer learning where in they use..."(P3)
"Each label vector (a tuple of label, label-probability pairs)." (incomplete sentence?P5) |
iclr_2018_HJRV1ZZAW | State-of-the-art deep reading comprehension models are dominated by recurrent neural nets. Their sequential nature is a natural fit for language, but it also precludes parallelization within an instances and often becomes the bottleneck for deploying such models to latency critical scenarios. This is particularly problematic for longer texts. Here we present a convolutional architecture as an alternative to these recurrent architectures. Using simple dilated convolutional units in place of recurrent ones, we achieve results comparable to the state of the art on two question answering tasks, while at the same time achieving up to two orders of magnitude speedups for question answering. | This paper borrows the idea of dilated CNNs and proposes a dilated-convolution-based module for fast reading comprehension, in order to deal with the processing of very long documents in many reading comprehension tasks. The method part is clear and well-written. The results are fine when the idea is applied to the BiDAF model, but not as good on the DrQA model.
(1) My biggest concern is about the motivation of the paper:
Firstly, another popular approach to speeding up reading comprehension models is hierarchical (coarse-to-fine) processing of passages, where the first step processes sentences independently (which could be parallelized), and the second step makes predictions over the whole passage by taking the sentence processing results. Examples include "Attention-Based Convolutional Neural Network for Machine Comprehension", "A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data", and "Coarse-to-fine question answering for long documents"
This paper does not compare to the above style of approach empirically, but the hierarchical approach seems to have more advantages and seems a more straightforward solution.
Secondly, many existing works on multi-passage reading comprehension (or open-domain QA, as it is often called in the papers) found that dealing with sentence-level passages could give better (or on-par) results compared with working on whole documents. Examples include "QUASAR: Datasets for question answering by search and reading", "SearchQA: A new q&a dataset augmented with context from a search engine", and "Reinforced Ranker-Reader for Open-Domain Question Answering". If sentence-level processing is already good enough in many applications, the motivation for speeding up LSTMs seems even weaker.
Even on the SQuAD data, the sentence-level processing seems sufficient: as discussed in this paper about Table 5, the author mentioned (at the end of Page 7) that "the Conv DrQA model only encode every 33 tokens in the passage, which shows that such a small context is ENOUGH for most of the questions".
Moreover, the proposed method failed to give any performance boost and resulted in a big performance drop on the better-performing DrQA system. Together with the above concerns, this makes me doubt the motivation of this work on reading comprehension.
I would agree that the idea of using dilated CNN (w/ residual connections) instead of BiLSTM could be a good solution to many online NLP services like document-level classification tasks. Therefore, the motivation of the paper may make more sense if the proposed method is applied to a different NLP task.
(2) A similar concern about the baselines: the paper did not compare with ANY previous work on speeding up RNNs, e.g. "Training RNNs as Fast as CNNs". That work and its predecessors also accelerated LSTMs several-fold without a significant performance drop on some RC models (including DrQA).
(3) About the speedup: it could be imagined that the speedup from using dilated CNNs largely depends on the model architecture. Considering that DrQA is the better system on both SQuAD and TriviaQA, the speedup on DrQA is the more important one. However, DrQA makes less use of LSTMs, and in order to cover a large receptive field, the dilated CNN version of DrQA achieves only a 2-4x speedup while still performing much worse. This makes the speedup less impressive (a small receptive-field calculation is sketched after these comments).
(4) It seems that this paper was finished in a rush. The experimental results are not well explained and there is not enough analysis of the results.
(5) I do not quite understand the reason for the big performance drop on DrQA. Could you please provide more explanations and intuitions? |
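As mentioned under point (3), the passage coverage of a dilated stack can be checked with a one-line receptive-field calculation; the kernel sizes and dilation rates below are hypothetical, chosen only to show how coverage grows with depth:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 1D convolutions:
    1 + sum_i (k_i - 1) * d_i."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# e.g. five layers of kernel size 3 with exponentially growing dilation
print(receptive_field([3] * 5, [1, 2, 4, 8, 16]))   # 63 tokens
```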
iclr_2018_SkBcLugC- | Ensembling multiple predictions is a widely-used technique to improve the accuracy of various machine learning tasks. In image classification tasks, for example, averaging the predictions for multiple patches extracted from the input image significantly improves accuracy. Using multiple networks trained independently to make predictions improves accuracy further. One obvious drawback of the ensembling technique is its higher execution cost during inference.This higher cost limits the real-world use of ensembling. In this paper, we first describe our insights on relationship between the probability of the prediction and the effect of ensembling with current deep neural networks; ensembling does not help mispredictions for inputs predicted with a high probability, i.e. the output from the softmax. This finding motivates us to develop a new technique called adaptive ensemble prediction, which achieves the benefits of ensembling with much smaller additional execution costs. Hence, we calculate the confidence level of the prediction for each input from the probabilities of the local predictions during the ensembling computation. If the prediction for an input reaches a high enough probability on the basis of the confidence level, we stop ensembling for this input to avoid wasting computation power. We evaluated the adaptive ensembling by using various datasets and showed that it reduces the computation cost significantly while achieving similar accuracy to the naive ensembling. We also showed that our statistically rigorous confidence-level-based termination condition reduces the burden of the task-dependent parameter tuning compared to the naive termination based on the pre-defined threshold in addition to yielding a better accuracy with the same cost. | The authors propose and evaluate an adaptive ensembling threshold using estimated confidence intervals from the t-distribution, rather than a static confidence level threshold. They show it can provide significant improvements in accuracy at the same cost as a naive threshold.
This paper has a nice simple idea at its core, but I don't think it's fully developed. There are a few major conceptual issues going on:
- The authors propose equation (3) as a stopping criterion because "computing CIs for all labels is costly." I don't see how this is true in any sense. The CI computation is literally just averages of a few numbers, which should be far cheaper than the massive matrix multiplies needed to *generate* those numbers in the neural network. Computing pair-wise comparisons naively in O(n^2) time could potentially blow up if the number of output labels is massive, but then you should still be able to keep some running statistics to avoid having to do a quadratic number of comparisons (e.g. the threshold is just the highest bound of any CI you encounter, so you keep track of both the max predicted confidence and max CI so far... then you have your answer in O(n) time; a sketch of this bookkeeping is given after these comments). I think the real issue is that the authors state that the confidence interval computation code is written in Python. That is a huge knock against this paper: when writing a paper about inference time, it's just due diligence to do the most basic inference-time optimizations (such as implementing an operation which should be effectively free in a C++ plugin).
- So by using (3) instead of the original proposed CI comparison that motivated this approach, the authors require that the predicted probability be greater than 1/2 + the CI at the given alpha level. This means that for problems with very large output spaces, getting enough probability mass to get over that 1/2 absolute threshold is potentially going to require a minimum number of evaluations and put a cap on the efficiency gain. This is what we see in Figure 3: for the few points evaluated, when the output space is large (ILSVRC 2012) there is no effective difference between the proposed method and a static threshold of 70%, indicating that the CI of 90% is roughly working out to be the 50% minimum + ~20% threshold from the CI.
- Thus the experiments in this paper don't really add much value in understanding the benefits of this approach as currently written. For due diligence, there should be the following:
1. Show the distribution of thresholds computed from the CI. Then compute, for a CI of 0.8, 0.9, etc., what the effective threshold is on average. Then, for every *average threshold* from the CI method, apply that as a static threshold. Then you will get exactly the delta of your method over the static threshold method.
2. Do the same, but using the pairwise CI comparison method.
3. The same again, but now show how effective this is as a function of the size of the output label space. E.g. add these numbers to Table 1 and Table 2 (for every "our adaptive ensemble", put the equivalent static threshold.)
4. Implement the CI computation efficiently if you are going to report actual runtimes. Note that for a paper like this, I don't think the runtimes are as important as the # of evaluations in the ensemble, so this is less important.
- With the above experiments I think this would be a good paper. |
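As promised in the first bullet, here is a minimal sketch of the kind of cheap, per-class confidence-interval bookkeeping being described; the predictor callables, the Dirichlet toy outputs, and the exact termination rule are placeholders rather than the paper's implementation:

```python
import numpy as np
from scipy.stats import t as student_t

def adaptive_ensemble(predictors, x, alpha=0.9, min_members=2):
    """Average softmax outputs, stopping once the top class's CI lower bound
    clears the highest upper bound among the other classes.

    The running statistics cost O(num_classes) per added member, which is
    negligible next to evaluating another network."""
    probs = []
    for k, predict in enumerate(predictors, start=1):
        probs.append(predict(x))
        if k < min_members:
            continue
        P = np.stack(probs)                              # (k, num_classes)
        mean = P.mean(axis=0)
        half = student_t.ppf(0.5 + alpha / 2, df=k - 1) * P.std(axis=0, ddof=1) / np.sqrt(k)
        top = int(np.argmax(mean))
        if mean[top] - half[top] > np.max(np.delete(mean + half, top)):
            break                                        # prediction is settled; stop ensembling
    return mean, k

rng = np.random.default_rng(0)
members = [lambda x, r=rng: r.dirichlet([8.0, 1.0, 1.0]) for _ in range(8)]
mean, used = adaptive_ensemble(members, x=None)
print(mean.round(3), "stopped after", used, "members")
```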
iclr_2018_S1Dh8Tg0- | Published as a conference paper at ICLR 2018 FIX YOUR CLASSIFIER: THE MARGINAL VALUE OF TRAINING THE LAST WEIGHT LAYER
Neural networks are commonly used as models for classification for a wide variety of tasks. Typically, a learned affine transformation is placed at the end of such models, yielding a per-class value used for classification. This classifier can have a vast number of parameters, which grows linearly with the number of possible classes, thus requiring increasingly more resources. In this work we argue that this classifier can be fixed, up to a global scale constant, with little or no loss of accuracy for most tasks, allowing memory and computational benefits. Moreover, we show that by initializing the classifier with a Hadamard matrix we can speed up inference as well. We discuss the implications for current understanding of neural network models. | Revised Review:
The authors have largely addressed my concerns with the revised manuscript. I still have some doubts about the C > N setting (the new settings of C / N of 4 and 2 aren't C >> N, and the associated results aren't detailed clearly in the paper), but I think the paper warrants acceptance.
Original Review:
The paper proposes fixing the classification layers of neural networks, replacing the traditional learned affine transformation with a fixed (e.g., Hadamard) matrix. This is motivated by the observation that classification layers frequently constitute a non-trivial fraction of a network's overall parameter count, compute requirements, and memory usage, and by the observation that removal of pre-classification fully-connected layers has often been found to have minimal impact on performance. Experiments are performed on a range of datasets and network architectures, in both image classification and NLP settings.
First, I'd like to note that the empirical component of this paper is strong: I was impressed by the breadth of architectures and settings covered, and the experiments left me reasonably convinced that the classification layer can often be fixed, at least for image classification tasks, without significant loss of accuracy.
I have two general concerns. For one, removing the fully connected classification layer is not a novel idea; All Convolutional Networks (https://arxiv.org/abs/1412.6806) reported excellent results without an additional fully connected affine transform (just a global average pooling after the last convolutional layer). I think it would be worth at least referencing/discussing differences with this and other all-convolutional architectures. Including a fixed Hadamard matrix for the classification layer is I believe new (although related to an existing literature on using structured matrices in neural networks).
However, I have doubts about the ability of the approach to scale to problems with a larger number of classes, which arguably is a primary motivation of the paper ("parameters ... grow linearly with the number of classes"). Specifically, the idea of using a fixed N x C matrix with C orthogonal columns (such as Hadamard) is only possible when N > C. This is a critical point: in the N > C regime, a final hidden representation with N dimensions can be chosen to achieve *any* C-dimensional output, regardless of the projection matrix used (so long as it is full rank). This makes it seem fairly reasonable to me that the network can (at least approximately, and complicated by the ReLU nonlinearities) fold the "desired" classification layer into the previous layer, especially with a learned scaling and bias term. In fact it's not clear to me that the fixed classification layer accomplishes anything here, beyond projecting from N -> C (i.e., if N = C, I'd guess it could be removed entirely similar to all convolutional nets, as long as the learned scaling and bias were retained).
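To make the N > C discussion concrete, here is a minimal sketch of the kind of fixed classifier in question: a frozen, column-orthonormal Hadamard projection with only a learned global scale and per-class bias (the layer sizes are illustrative, not the paper's):

```python
import numpy as np
from scipy.linalg import hadamard

N, C = 512, 10                                 # feature dim > number of classes
H = hadamard(N)[:, :C] / np.sqrt(N)            # fixed N x C matrix with orthonormal columns
alpha, b = 1.0, np.zeros(C)                    # the only trainable classifier parameters

def classify(features):
    """Logits from the fixed projection; only alpha and b would be learned."""
    return alpha * features @ H + b

logits = classify(np.random.randn(4, N))       # batch of 4 feature vectors
print(logits.shape)                            # (4, 10)
```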
On the other hand, when C > N, it is not possible to have mutually orthogonal columns, and in general the output is constrained to lie in an N-dimensional subspace of the overall C-dimensional output space. Picking somewhat randomly a *fixed* N-dimensional subspace seems like a bad idea when N << C, since it is unlikely to select a subspace in which it is possible to adequately capture correlations between the different classes. This makes the proposed technique much less appealing for precisely the family of problems where it would be most effective in reducing compute/memory requirements. It also provides (in my view) a clearer explanation for the failure of the approach in the NLP setting. These issues were not discussed anywhere in the text as far as I can tell, and I think it's necessary to at least acknowledge that mutually orthogonal columns can't be chosen when C > N in section 2.2 (and probably include a longer discussion on the probable implications).
Overall, I think the paper provides a useful observation that clearly isn't common knowledge, since classification layers persist in many popular recent architectures. But the notion of fixing or removing the classification layer isn't particularly novel, and I don't believe the proposed technique would scale well to settings with many classes. As is I think the paper falls slightly short. |
iclr_2018_H1MczcgR- | UNDERSTANDING SHORT-HORIZON BIAS IN STOCHASTIC META-OPTIMIZATION
Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the metaobjective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if metaoptimization is to scale to practical neural net training regimes. | This paper studies the issue of truncated backpropagation for meta-optimization. Backpropagation through an optimization process requires unrolling the optimization, which due to computational and memory constraints, is typically restricted or truncated to a smaller number of unrolled steps than we would like.
This paper highlights this problem as a fundamental issue limiting meta-optimization approaches. The authors perform a number of experiments on a toy problem (stochastic quadratics) which is amenable to some theoretical analysis as well as a small fully connected network trained on MNIST.
(side note: I was assigned this paper quite late in the review process, and have not carefully gone through the derivations--specifically Theorems 1 and 2).
The paper is generally clear and well written.
Major comments
-------------------------
I was a bit confused about why 1000 SGD+momentum pre-training steps were needed. As far as I can tell, pre-training is not typically done in the other meta-optimization literature? The authors suggest this is needed because "the dynamics of training are different at the very start compared to later stages", which is a bit vague. Perhaps the authors can expand upon this point?
The conclusion suggests that the difference between the greedy and fully optimized schedules is due to the curvature (poor scaling) of the objective--but Fig. 2 and the earlier discussion talked about the noise in the objective as introducing the bias (e.g. from earlier in the paper, "The noise in the problem adds uncertainty to the objective, resulting in failures of greedy schedule"). Which is the real issue, noise or curvature? Would running the problem on quadratics with different condition numbers be insightful?
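One cheap way to probe this question is the 1-D noisy quadratic, where the one-step-greedy learning rate has a closed form: for L(theta) = 0.5*h*theta^2 with additive gradient noise of standard deviation sigma, minimizing the expected next-step loss gives alpha* = h*theta^2 / (h^2*theta^2 + sigma^2). Without noise this is 1/h regardless of theta, so curvature alone does not shrink the step; with noise it collapses as theta approaches the noise floor. A tiny sketch of this (a simplified 1-D probe, not the paper's multi-dimensional analysis):

```python
def greedy_lr(theta, h, sigma):
    """One-step-optimal learning rate for the 1-D noisy quadratic
    L(theta) = 0.5*h*theta**2 with gradient noise of std sigma:
    argmin_a E[L(theta - a*(h*theta + eps))]."""
    return h * theta ** 2 / (h ** 2 * theta ** 2 + sigma ** 2)

for theta in (3.0, 1.0, 0.3, 0.1):
    print(theta,
          round(greedy_lr(theta, h=1.0, sigma=0.0), 3),   # noiseless: always 1/h
          round(greedy_lr(theta, h=1.0, sigma=1.0), 3))   # noisy: collapses near the noise floor
```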
Minor comments
-------------------------
The stochastic gradient equation in Sec 2.2.2 is missing a subscript: "h_i" instead of "h"
It would be nice to include the loss curve for a fixed learning rate and momentum for the noisy quadratic in Figure 2, just to get a sense of how that compares with the greedy and optimized curves.
It looks like there was an upper bound constraint placed on the optimized learning rate in Figure 2--is that correct? I couldn't find a mention of the constraint in the paper (the optimized learning rate remains at 0.2 for the first ~60 steps).
Figure 2 (and elsewhere): I would change 'optimal' to 'optimized' to distinguish it from an optimal curve that might result from an analytic derivation. 'Optimized' makes it more clear that the curve was obtained using an optimization process.
Figure 2: can you change the line style or thickness so that we can see both the red and blue curves for the deterministic case? I assume the red curve is hiding beneath the blue one--but it would be good to see this explicitly.
Figure 4 is fantastic--it succinctly and clearly demonstrates the problem of truncated unrolls. I would add a note in the caption to make it clear that the SMD trajectories are the red curves, e.g.: "SMD trajectories (red) during meta-optimization of initial effective ...". I would also change the caption to use "meta-training losses" instead of "training losses" (I believe those numbers are for the meta-loss, correct?). Finally, I would add a colorbar to indicate numerical values for the different grayscale values.
Some recent references that warrant a mention in the text:
- both of these learn optimizers using longer numbers of unrolled steps:
Learning gradient descent: better generalization and longer horizons, Lv et al, ICML 2017
Learned optimizers that scale and generalize, Wichrowska et al, ICML 2017
- another application of unrolled optimization:
Unrolled generative adversarial networks, Metz et al, ICLR 2017
In the text discussing Figure 4 (middle of pg. 8), "which is obtained by using..." should be "which are obtained by using..."
In the conclusion, "optimal for deterministic objective" should be "deterministic objectives" |
iclr_2018_S1viikbCW | TCAV: Relative concept importance testing with Linear Concept Activation Vectors
Neural networks commonly offer high utility but remain difficult to interpret. Developing methods to explain their decisions is challenging due to their large size, complex structure, and inscrutable internal representations. This work argues that the language of explanations should be expanded from that of input features (e.g., assigning importance weightings to pixels) to include that of higher-level, humanfriendly concepts. For example, an understandable explanation of why an image classifier outputs the label "zebra" would ideally relate to concepts such as "stripes" rather than a set of particular pixel values. This paper introduces the "concept activation vector" (CAV) which allows quantitative analysis of a concept's relative importance to classification, with a user-provided set of input data examples defining the concept. CAVs may be easily used by non-experts, who need only provide examples, and with CAVs the high-dimensional structure of neural networks turns into an aid to interpretation, rather than an obstacle. Using the domain of image classification as a testing ground, we describe how CAVs may be used to test hypotheses about classifiers and also generate insights into the deficiencies and correlations in training data. CAVs also provide us a directed approach to choose the combinations of neurons to visualize with the DeepDream technique, which traditionally has chosen neurons or linear combinations of neurons at random to visualize. | Summary
---
This paper proposes the use of Concept Activation Vectors (CAVs) for interpreting deep models. It shows how concept activation vectors can be used to provide explanations where the user provides a concept (e.g., red) as a set of training examples and then the method provides explanations like "If there were more red in this image then the model would be more likely to classify it as a fire truck."
Four criteria are enumerated for evaluating interpretability methods:
1. accessibility: ML background should not be required to interpret a model
2. customization: Explanations should be generated w.r.t. user-chosen concepts
3. plug-in readiness: Should be no need to re-train/modify the model under study
4. quantification: Explanations should be quantitative and testable
A Concept Activation Vector is simply the weight vector of a linear classifier trained on some examples (100-500) of a user-provided concept of interest using features extracted from an intermediate network layer. These vectors can be trained in two ways:
1. 1-vs-all: The user provides positive examples of a concept and all other existing training data is treated as negatives
2. 1-vs-1: The user provides sets of positive and negative examples, allowing the negative examples to be targeted to one category
Once a CAV is obtained it is used in two ways:
First, it provides further verification that higher level concepts tend to be "disentangled" in deeper network layers while low level concepts are "disentangled" earlier in the network. This work shows that linear classifier accuracy increases significantly using deeper features for higher level concepts but it only increases marginally (or even decreases) when modeling lower level concepts.
Second, and this is the main point of the paper, the relative importance of concepts w.r.t. a particular task can be evaluated. Suppose an image (e.g., of a zebra) produces a feature vector f_l at layer l and v_C^l is a concept vector learned to classify the presence of stripes from layer l features. Then the probability the model assigns to the zebra class can be evaluated using features f_l and then f_l + v_C^l. If the latter probability is greater, then adding stripes will increase the model's confidence in the zebra class. Furthermore, the method goes on to measure how often stripes increase zebra confidence across all images. Rather than explaining the network's decision for a particular image, this average metric measures the global importance of the stripes concept for zebra.
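For concreteness, the two steps described above can be sketched as follows: fit a linear classifier on layer-l activations to get the CAV, then measure how often nudging activations along the CAV raises the class probability. The toy `prob` function below stands in for the top of the network, and the activation matrices are made up; this is an illustration of the procedure, not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(pos_acts, neg_acts):
    """CAV = weight vector of a linear classifier separating concept examples
    from negatives in layer-l activation space."""
    X = np.vstack([pos_acts, neg_acts])
    y = np.concatenate([np.ones(len(pos_acts)), np.zeros(len(neg_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_.ravel()
    return v / np.linalg.norm(v)

def tcav_score(acts, cav, class_prob_from_activations, step=1.0):
    """Fraction of examples whose class probability increases when the layer-l
    activation is moved a small step along the CAV (roughly eq. 1's I^up)."""
    p0 = class_prob_from_activations(acts)
    p1 = class_prob_from_activations(acts + step * cav)
    return float(np.mean(p1 > p0))

rng = np.random.default_rng(0)
d = 32
w = rng.standard_normal(d)                     # hidden direction the class probability follows

def prob(A):
    return 1.0 / (1.0 + np.exp(-A @ w))

pos, neg, acts = (rng.standard_normal((100, d)) for _ in range(3))
pos = pos + 0.5 * w                            # concept examples lean along w
cav = learn_cav(pos, neg)
print(round(tcav_score(acts, cav, prob), 2))   # close to 1: the concept raises the class prob
```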
Pros
---
The paper proposes a simple and novel idea which could have a major impact on how deep networks are explained. At a high level the novelty comes from replacing the gradient (or something similar) used in saliency methods with a directional derivative. Users can align the direction to any concept they find relevant, so the concept space used to explain a prediction is no longer fixed a-priori (e.g. to pixels in the input space). It can adapt to user suspicions and expectations.
Cons
---
Concerns about story/presentation:
- The second use of CAVs, to test relative importance of concepts, is basically an improved saliency method. Its advantages over other saliency methods are stated clearly in 2.1, but it should not be portrayed as fundamentally different.
The two quantities in eq. 1 can be thought of in terms of directional derivatives. To compute I_w^up start by computing a finite differences approximation of directional derivative of the linear classifier probability p_k(y) with respect to layer l features in the direction of the CAV v_C^l. Call this quantity g_i (for the ith example). Then I_w^up is the average of 1(g_i > 0) over all examples. I think the notion of relative importance used here is basically the idea of a directional derivative.
This doesn't change the contribution of the paper, but it should be mentioned, and section 2.1 should be changed so it doesn't suggest this method is fundamentally different from saliency methods in terms of criterion 4.
* Evaluation and Desiderata 4: The fourth criterion for interpretability laid out by the paper says an explanation should be quantitative and testable. I'm not sure exactly what this is supposed to mean. I see two ways to interpret the quantitative criterion.
One way to interpret the "quantifiability" criterion is to say that it requires explanations to be presented as numeric values. But most methods do this. In particular, saliency methods report results in terms of pixel brightness (that is a numeric quantity) even though humans may not know how to interpret that correctly. I do not think this is what was intended, so my second option is to say that the criterion requires an explanation be judged good or bad according to some quantitative metric. But this paper provides no such metric. The explanations in figure 5 are not presented as good or bad according to any metric.
While it is significant that the method meets the first 3 criteria, these do not establish the fidelity of the method. Do humans generalize these explanations to valid inferences about model behavior? Maybe consider some evaluation options from section 3 of Doshi-Velez and Kim 2017 (cited in the paper).
* Section 4.1.1: "This experiment does not yet show that these concept activation vectors align with the concepts that makes sense semantically to humans."
Isn't test set accuracy a better measure of alignment with the human concept than the visualizations? Given a choice between a concept vector which produced good test accuracy and poor visualizations and another concept vector which produced poor test accuracy and good visualizations I would think the one with good test accuracy is better aligned to the human concept. I would still prefer a concept vector which satisfies both.
* Contrary to the description in section 2.2, I think DeepDream optimizes a natural image (non-random initialization) rather than starting from a random image. It looks like these visualization start from a random initialization. Which method is used? Maybe cite this paper, which gives a nice overview: "Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks" by Nguyen et. al. in the Visualization for Deep Learning workshop at ICML 2016
* In section 4.1.3 I'm not quite sure what the point is. Please state it more clearly. Is the context class the same as the negative set used to train the classifier? Why should it be different/harder to sort corgi examples according to a concept vector as opposed to sorting all examples according to a concept vector? This seems like a useful way of testing to be sure CAV's represent human concepts, but I'm not sure what context concepts like striped/CEO provide.
* Relative vs absolute importance and user choice: Section 4.2 claims that figure 5 shows that a CAV "captures an important aspect of the prediction." I would be a bit more careful about the distinction between relative and absolute here. If red makes images more probably fire trucks then it doesn't necessarily mean that red is important for the fire truck concept in an absolute sense. Can we be sure that there aren't other concepts which more dramatically affect outputs? What if a user makes a mistake and only requests explanations with respect to concepts that are irrelevant to the class being explained? Do we need to instruct users on how to best interpret the explanation?
* How practical is this method? Is it a significant burden for users to provide 100-500 images per concept? Are the top 100 or so images from a search engine good enough to specify a CAV?
Minor missing experimental settings and details:
* Section 3 talks about a CAV defined with respect to a non-generic set D of negative examples. Is this setting ever used in the experiments or is the negative set always the same? How does specifying a narrow set of negatives change the CAV for concept C?
* I assume the linear classifier is a logistic regressor, but this is never stated.
* TCAV measures importance/influence as an average over a dataset. This is a validation set, right? For how many of these images are both the user concept and target concept unrelated to the image content (e.g., stripes and zebra for an image of a truck)? When that happens is it reasonable to expect meaningful explanations? They may not be meaningful because the data distribution used to train the CAV probably does not even sparsely cover all concepts in the network's train set. (related to "reference points" in "The (Un)reliability of Saliency Methods" submitted to ICLR18)
* For relative importance testing it would be nice to see a note about the step size selection (1.0) and experiments that show the effect of different step sizes. Hopefully influence is monotonic in step size so that different step sizes do not significantly change the results.
* How large is the typical difference between p_k(y) and p_k(y_w) in eq. 1? If this difference is small, is it meaningful? Are small differences signal or noise?
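On the linear-classifier point above, this is the kind of CAV fitting I am assuming -- a hedged sketch with synthetic placeholder activations (in practice these would be the network's layer activations for the concept and negative image sets); the paper may use a different solver or regularizer:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder activations: ~200 concept images and ~500 negatives from a 512-d layer.
rng = np.random.default_rng(0)
acts_concept = rng.normal(size=(200, 512))
acts_negative = rng.normal(size=(500, 512))

X = np.vstack([acts_concept, acts_negative])
y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_negative))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
cav = clf.coef_.ravel() / np.linalg.norm(clf.coef_)   # unit-norm concept direction
print("held-out accuracy:", clf.score(X_te, y_te))    # the alignment measure I refer to above
```

Reporting this held-out accuracy per concept would also address my earlier question about accuracy vs. visualizations.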
Final Evaluation
---
I would like to see this idea published, but not in its current form. The method meets a relevant set of criteria that no other method seems to meet, but arguments set forth in the story need some revision and the empirical evaluation needs improvement, especially with respect to model fidelity. I would be happy to change my rating if the above points are addressed. |
iclr_2018_rk1FQA0pW | Deep neural networks (DNN) have shown promising performance in computer vision. In medical imaging, encouraging results have been achieved with deep learning for applications such as segmentation, lesion detection and classification. Nearly all of the deep learning based image analysis methods work on reconstructed images, which are obtained from original acquisitions via solving inverse problems (reconstruction). The reconstruction algorithms are designed for human observers, but not necessarily optimized for DNNs, which can often observe features that are incomprehensible to human eyes. Hence, it is desirable to train the DNNs directly from the original data, which lie in a different domain from the images. In this paper, we proposed an end-to-end DNN for abnormality detection in medical imaging. To align the acquisition with the annotations made by radiologists in the image domain, a DNN was built as the unrolled version of iterative reconstruction algorithms to map the acquisitions to images, and followed by a 3D convolutional neural network (CNN) to detect the abnormality in the reconstructed images. The two networks were trained jointly in order to optimize the entire DNN for the detection task from the original acquisitions. The DNN was implemented for lung nodule detection in low-dose chest computed tomography (CT), where a numerical simulation was done to generate acquisitions from 1,018 chest CT images with radiologists' annotations. The proposed end-to-end DNN demonstrated better sensitivity and accuracy for the task compared to a two-step approach, in which the reconstruction and detection DNNs were trained separately. A significant reduction of the false positive rate on suspicious lesions was observed, which is crucial for the known over-diagnosis in low-dose lung CT imaging. The images reconstructed by the proposed end-to-end network also presented enhanced details in the region of interest. | This paper proposes to jointly model computed tomography reconstruction and lesion detection in the lung, training the mapping from raw sinogram to detection outputs in an end-to-end manner. In practice, such a mapping is computed separately, without regard to the task for which the data is to be used. Because such a mapping loses information, optimizing such a mapping jointly with the task should preserve more information that is relevant to the task. Thus, using raw medical image data should be useful for lesion detection in CT as well as most other medical image analysis tasks.
Style considerations:
The work is adequately motivated and the writing is generally clear. However, some phrases are awkward or unclear, and there are occasional minor grammar errors; it would be useful to have a native English speaker polish these, if possible. Also, there are numerous typos that could easily be remedied with some final proofreading. Generally, the work is well articulated with sound structure but needs polish.
A few other minor style points to address:
- "g" is used throughout the paper for two different networks and also to define gradients - if would be more clear if you would choose other letters.
- S3.3, p. 7: the term "iteration" is reused with different meanings; please clarify.
- fig 10: label the columns in the figure, not in the description
- fig 11: label the columns in the figure with iterations
- fig 8 is not referenced in the text
Questions:
1. Before fine-tuning, were the reconstruction and detection networks trained end-to-end (with both L2 loss and cross-entropy loss) or were they trained separately and then joined during fine-tuning?
(If it is the former and not the latter, please make that clearer in the text. I expect that it was indeed the former; if it was not, I would expect fully end-to-end training in the revision.)
2. Please confirm: during the fine-tuning phase of training, did you use only the cross-entropy loss and not the L2 loss?
3a. From equation 3 to equation 4 (on an iteration of reconstruction), the network g() was dropped. It appears to replace the diagonal of a Hessian (of R), which is probably a conditioning term. Have you tried training a g() network? Please discuss the ramifications of removing this term. (A sketch of the kind of update I have in mind appears after these questions.)
3b. Have you tracked the condition number of the Jacobian of f() across iterations? This should be like tracking the condition number of the Hessian of R(x).
4. Please discuss: is it better to replace operations on R() with neural networks rather than to replace R()? Why?
5. On page 5, you write "masks for lung regions were pre-calculated". Were these masks manual segmentations or created with an automated method?
6. Why was detection targeted only at "non-small nodules"? Have you tried detecting small nodules?
7. On page 11, you state: "The tissues in lung had much better contrast in the end-to-end network compared to that in the two-step network". I don't see evidence to support that claim. Could you demonstrate that?
8. On page 12, relating to figure 11, you state:
"Whereas both methods kept similar structural component, the end-to-end method had more focus on the edges and tissues inside lung compared to the two-step method. As observed in figure 11(b), the structures of the lung tissue were much more clearer in the end-to-end networks. This observation indicated that sharper edge and structures were of more importance for the detection network than the noise level in the reconstructed images, which is in accordance with human perceptions when radiologists perform the same task."
However, while these claims appear intuitive and such results may be expected, they are not backed up by figure 11. Looking at the feature map samples in this figure, I could not identify whether they came from different populations. I do not see the evidence for "more focus on the edges and tissues inside lung" for the end-to-end method in fig 11. It is also not obvious whether indeed "the structures of the lung tissue were much more clearer" for the end-to-end method, in fig 11. Can you clarify the evidence in support of these claims?
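To make question 3a concrete, this is the kind of penalized iterative reconstruction update I am picturing -- written from memory of standard separable-quadratic-surrogate (SQS) schemes, not copied from the paper, so the exact form and the role that g() plays may differ:

```latex
% One SQS-style update for penalized weighted least squares (division is element-wise):
x_{k+1} \;=\; x_k \;-\;
\frac{A^\top W \,(A x_k - b) \;+\; \beta\, \nabla R(x_k)}
     {A^\top W A \,\mathbf{1} \;+\; \beta\, \operatorname{diag}\!\left(\nabla^2 R(x_k)\right)}
```

If equation 3 has roughly this form, then the denominator is exactly where a learned g() would sit, and dropping it in equation 4 amounts to removing the per-voxel preconditioner -- which is also why I ask about conditioning in 3b.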
Other points to address:
1. Please report statistical significance for your results (e.g., in fig 5b and in the text). Also, please include confidence intervals in table 2.
2. Although cross-entropy values were reported, detection metrics were not (except for the ROC curve with false positives and false negatives). Please compute accuracy, precision, and recall to more clearly evaluate detection performance (a minimal sketch of what I mean, including confidence intervals, appears after this list).
3a. "Abnormality detection" implies the detection of anything that is unusual in the data. The method you present targets a very specific abnormality (lesions). I would suggest changing "abnormality detection" to "lesion detection".
3b. The title should also be updated accordingly. Considering also that the presented work is on a single task (lesion detection) and a single medical imaging modality (CT), the current title appears overly broad. I would suggest changing it from "End-to-End Abnormality Detection in Medical Imaging" -- possibly to something like "End-to-End Computed Tomography for Lesion Detection".
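For points 1 and 2 above, something along these lines would be sufficient -- a minimal sketch with synthetic placeholder labels, not a prescription for your evaluation pipeline:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Placeholder per-candidate labels and thresholded predictions; in practice these
# would come from the validation/test detections of each model.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))

# Bootstrap 95% confidence interval for any metric (here: recall).
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    boot.append(recall_score(y_true[idx], y_pred[idx]))
print("recall 95% CI:", np.percentile(boot, [2.5, 97.5]))
```

Significance between the end-to-end and two-step models could then be assessed by bootstrapping the difference of a metric on paired predictions (or with McNemar's test).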
Conclusion:
The motivation of this work is valid and deserves attention. The implementation details for modeling reconstruction are also valuable. It is interesting to see improvement in lesion detection when training end-to-end from raw sinogram data. However, lung lesion detection is the only task on which the utility of this method is evaluated, and the detection improvement appears modest. This work would benefit from additional experimental results or improved analysis and discussion.
iclr_2018_B14uJzW0b | Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple neural network: a ReLU network with one hidden layer consisting of two ReLU units, and show that all local minimizers are global. This, combined with recent work of Lee et al. (2017), shows that gradient descent converges to the global minimizer. | In this paper the authors studied the theoretical properties of manifold descent approaches in a standard regression problem whose regressor is a simple neural network. Leveraging two recent results in global optimization, they showed that with a simple two-layer ReLU network with two hidden units, the problem with a standard MSE population loss function does not have spurious local minima. Based on the results by Lee et al., which show that first-order methods converge to local minima (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent technique, including standard gradient descent methods. In general I found this paper clearly written and technically sound. I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restricted to very simple NN architectures.
Contribution:
As discussed in the literature review section, whereas previous results studied theoretical convergence properties for problems involving a single-hidden-unit NN, this paper extends the convergence results to problems involving a NN with two hidden units. The analysis becomes considerably more complicated, and the contribution seems to be novel and significant. I am not sure why the authors mentioned the work on over-parameterization, though; it does not seem relevant to the results of this paper (the NN architecture considered here is rather small).
Comments on the Assumptions:
- Please explain the motivation behind the standard Gaussian assumption on the input vector x.
- Please also provide more motivation for the assumption of orthogonal weights, w_1^\top w_2 = 0 (or the acute-angle assumption in Section 6).
Without further justification, it seems that the theoretical result only holds for an artificial problem setting. While the ReLU activation is very common in NN architectures, without more motivation I am not sure what the impact of these results is. (A sketch of the objective I believe is being analyzed, and where these assumptions enter, follows below.)
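For concreteness, my reading of the population objective in question -- reconstructed from the abstract, so the normalization and whether the orthogonality is imposed on the target weights v_i or the learned weights w_i may differ from the paper:

```latex
\min_{w_1, w_2}\;
\mathbb{E}_{x \sim \mathcal{N}(0, I)}
\Big[ \big( \sigma(w_1^\top x) + \sigma(w_2^\top x)
      - \sigma(v_1^\top x) - \sigma(v_2^\top x) \big)^2 \Big],
\qquad \sigma(z) = \max(z, 0).
```

The standard Gaussian assumption is presumably what makes this expectation analytically tractable -- terms like \mathbb{E}[\sigma(u^\top x)\,\sigma(v^\top x)] reduce to closed-form functions of the angle between u and v (the arc-cosine kernel) -- and the orthogonality assumption then simplifies the resulting landscape. Stating this explicitly, and discussing whether the no-spurious-minima conclusion survives without these assumptions, would address my concern.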
General Comment:
The technical section is quite lengthy, and unfortunately I was not able to go over every single detail of the proofs. From the analysis in the main paper, I believe the theoretical contribution is correct and sound. While I appreciate the technical contributions, to improve the readability of this paper it would be great to see more motivation for the problem studied here (even with simple examples). Furthermore, it is important to discuss the technical assumptions of 1) standard Gaussianity of the input vector and 2) orthogonality of the weights (and the acute-angle assumption in Section 6) beyond the discussion in Section 8.1, as they are critical to the derivations of the main theorems.